Dynamic Data Analysis
Olivier Houzé (OH) - Author
Engineering Degree from Ecole Polytechnique (France, 1982) and MSc in Petroleum
Engineering from Stanford University (USA, 1983). After four years as a Field Engineer
with Flopetrol Schlumberger, co-founded KAPPA in 1987 and has been its Managing
Director since 1991. Author of several papers on Pressure Transient Analysis and
co-author of the 2009 SPE monograph on Pressure Transient Testing.
Simon Trin (ST) - Project Coordinator (the one who lost sleep)
Engineering Degree from Ecole Supérieure de l’Electricité (France, 2006), MSc and
Engineer’s degree in Petroleum Engineering from Imperial College (United Kingdom,
2006). After two years as a Field Reservoir Engineer with Shell International, joined
KAPPA in 2008 as an R&D Reservoir Engineer.
Other contributors
Véronique Chassignol: Technical author, final formatting
Benoît Berthoud: Artworks
Kevin Siggery: Reviewer
Joop de Wit: Reviewer
David Wynne: Reviewer
1 – Introduction
OH – OSF – DV
This book constitutes the reference notes for many of the KAPPA training courses: depending on the particular course, only the relevant chapters are compiled, and all slides used during the course are replicated in this document to make the course easy to follow.
Finally, although a generic document, it is part of the technical reference manual for the KAPPA software suite:
- Amethyste for Well Performance Analysis (though we acknowledge that in this first version on the subject the dedicated chapter is somewhat sparse).
- In this latest version of the DDA book we have added a chapter on Emeraude, the Production Logging software, which is currently a stand-alone product.
This book is not designed to teach software functionality. Guided sessions, online videos and
online help exist for that purpose. Here we focus on the methodology involved in the analysis
of dynamic data. However we shamelessly refer to our software and exclusively use KAPPA
screen dumps. This version is synchronized with Ecrin v4.12.
Actually, let us put things in the right order. At KAPPA we do not promote methods because
they are integrated in our software products, it is the other way around. We implement in our
products what we think is right at a given time, and this is what we then show here.
The challenge of this book then was similar to that of Ecrin: to write something that covers a
wide range of disciplines, whilst avoiding duplication and confusion for those who are only
interested in one subject.
We hope you will enjoy reading it as much as it was a painful process to put it together…
The authors
Sorry to talk about this dull detail but this is a sign of the times. This book is the intellectual
and commercial property of KAPPA. It is available on our WEB site at no cost to registrants.
You are welcome to download it and print it for your own use. If you are in the academic
world, or even if you are a professional instructor, you are welcome to have it printed for your
students. You are also permitted to take any part of it and integrate it into other media on the
condition that the copyright of KAPPA is added and visible. This applies to the copies of the
copies, etc, etc.
You are NOT allowed (and KAPPA reserves the right to take legal action and would):
To use any part of this book in any media format without a clear reference and
acknowledgement to KAPPA.
We have seen in the past that some of our diagrams and figures, available from our software,
on-line help or the support material we deliver with our courses, have been used in other
publications. KAPPA has no objection to this; however, we do ask, and expect, to be acknowledged. Since the foundation of the Company, KAPPA has regularly, systematically and officially registered its software and supporting diagrams and figures. In addition, the figures
we insert in our published documents are bitmap exports of vector (Illustrator™) original
documents that we also keep and register. So we can prove both the chronological and
technical history of our supporting material.
Things started to drift when the same tools were applied to other operations: particularly nice
formation test data, unplanned shut-ins recorded by the first permanent measurements, etc.
Welltest interpretation was then rebranded Pressure Transient Analysis (PTA).
Things drifted even more when people started to look at rates and pressures at the much
larger time scale of the well production life. People started entering data in PTA software, but
this was wrong, as models and hypotheses to apply at these time scales were different.
The basic, empirical tools of rate decline analysis were given more substance with ad-hoc
superposition, various loglog plots and models already developed for PTA. We ended up in the
early 2000’s with what is called today Rate Transient Analysis (RTA) or Production Analysis
(PA). Though both terms are valid, in this book we will use the latter.
Then came the ‘Intelligent Fields’. They are not so intelligent yet, but they should become so
by a smart combination of permanent measurements, a federating data model and an
enhanced workflow that would let automatic processes do most of the ground work while
engineers would work by exception on data / cases / wells that require remedial decisions.
Even simulation ended up in the area of dynamic data. Dedicated numerical models could History Match (HM) the production information and permanent gauge data. In order to be
reasonably fast this would not be done by large simulators, but by numerical models
intermediate between a single tank material balance and full field simulators.
In KAPPA the four elements above were integrated in Ecrin (French word for jewel box),
including Saphir NL (PTA), Topaze NL (PA) and Rubis (HM). PDG data processing was handled
by Diamant Master, a client-server application.
Production Logging (PL) used to be run when something was wrong with the well
production, and Emeraude was developed separately by KAPPA in the mid-1990s. In the past
twenty years this perception has evolved, and PL is now a full scale reservoir tool, especially in
multilayered formations and for complex well geometries.
If we consider that all the above deal with transient data (even very long transients), Well
Performance Analysis (WPA) is somewhat at the junction between the transient and the
steady state (or pseudo-steady state) worlds. Though it generally uses steady-state models it
is applied in transient analysis to correct responses to datum. It shares the IPRs with PTA, and uses the same well models and flow correlations as PL.
Dynamic Flow describes any intended or unintended flow process, during exploration or
production operations, where diffusion of fluid takes place within a reservoir. In most cases the
dynamic flow will be man-made, by putting one or several wells in production, and/or by
injecting fluids into one or several wells. This however is not strictly required. Drilling and
completing a well crossing several commingled zones may create a crossflow without the well
producing, and this will also qualify as a dynamic flow.
Dynamic Data are the set of measurements of physical properties that are somewhat
impacted by the dynamic flow. This data may be versus depth (e.g. production and
temperature logs), versus time (e.g. usual pressure, temperature and rate data) or both (e.g.
fiber optics data).
Dynamic Data Analysis is simply the process of interpreting and modeling the dynamic data.
As introduced before, this includes Pressure Transient Analysis (PTA), Production Analysis (PA),
History Matching (HM), Well Performance Analysis (WPA) and Production Logging (PL). In the
figure below we have even added a specific item for Formation Testers, even though it is not
yet the focus of a specific KAPPA product.
The previous version of the KAPPA book was called Dynamic Flow Analysis. When we released
it we believed this term would become the industry standard. It was the name of a specific SPE
ATW in Kota Kinabalu in 2004: “From Well Testing to Dynamic Flow Analysis”.
However the term Dynamic Flow Analysis was not embraced in the industry and had somewhat
become a KAPPA brand (thanks to the good success of this book), which is absolutely not what
we had in mind. Also DFA was an acronym already used in the industry for Formation Tests.
As the industry accepted term now seems to be ‘Dynamic Data’ we have decided to eat our hat
and rename the technique as per the industry standards.
The schematics below show the space scales of the different components of Dynamic Data
Analysis. Areas are naturally not exclusive, and there are overlaps:
- Formation Testers (FT, yellow) share some diagnostic and modeling tools with PTA. They bring a vertical understanding beyond what standard PTA can do. The diffusion allows FT to see beyond the immediate surroundings of the well. However, despite some claims to the contrary, the relation between the duration of the test and the investigated area does not match what a standard welltest will see laterally.
- PTA (blue disk) brings information in an intermediate area around the well (the infamous
radius of investigation). Deconvolution on several shut-ins provides information beyond
the investigation zone of individual build-ups (blue circle).
- Though it can use multi-well models and diagnostics, PA (orange) typically gives long term
information on individual wells in their respective drainage areas.
- HM (red) will do the same in multiple-well mode, exclusively using a numerical model.
- PL (green) provides detailed information inside the well in front of the producing intervals.
- WPA (purple) models the well productivity, its evolution in time and sensitivity to
completion options. It also provides a means to correct pressure and rates to datum.
When PTA was still called Well Test Interpretation, analyses were performed on data acquired
during dedicated operations called well tests.
A typical well test set-up is shown in the figure below. Temporary equipment is installed
downhole and at surface, the well is put on production under a predefined program and the
diagnostic is performed, generally on a shut-in period after a clean-up phase, an initial shut-in
and a stable production phase during which producing rates are measured at the separator.
The data absolutely required to perform a PTA are the rates, the pressures (preferably
downhole), the fluid PVT and a few additional parameters (well radius, pay zone, etc) required
to switch from a qualitative to a quantitative analysis.
The main flow regime of interest is the Infinite Acting Radial Flow, or IARF, which occurs after
well effects have faded and before boundaries are detected. IARF may not always be seen.
IARF provides an average reservoir permeability around the well and the well productivity (skin).
When the well is shut in we also get an estimate of the reservoir static pressure (p* or pi). The
first PTA methods were specialized plots (MDH, Horner) introduced in the 1950’s to identify
and quantify IARF. Other specialized plots, dedicated to other flow regimes, followed.
In the 1970’s loglog type-curve matching techniques were developed to complement straight
lines. One would plot the pressure response on a loglog scale on tracing paper and slide it over
pre-printed type-curves until a match was obtained. The choice of the type-curve and the
relative position of the data (the match point) provided physical results. These methods were
of poor resolution until the Bourdet derivative was introduced.
In 1983, the Bourdet derivative, i.e. the slope of the semilog plot displayed on the loglog plot,
increased the resolution and reliability of a new generation of type-curves.
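As an illustration of this definition, here is a minimal numerical sketch of the derivative computation — not the implementation of any particular software — assuming a drawdown with pressure change ∆p sampled at times t, and using the classic three-point weighted difference with a logarithmic smoothing window:

```python
import numpy as np

def bourdet_derivative(t, dp, L=0.2):
    """Bourdet derivative d(dp)/d(ln t): weighted central difference of the
    semilog slope, smoothed over a window of L natural-log units.
    For a build-up, ln(t) would be replaced by the superposition time."""
    t, dp = np.asarray(t, float), np.asarray(dp, float)
    x = np.log(t)
    der = np.full(dp.size, np.nan)
    for i in range(1, dp.size - 1):
        j = i - 1
        while j > 0 and x[i] - x[j] < L:              # left point at least L away
            j -= 1
        k = i + 1
        while k < dp.size - 1 and x[k] - x[i] < L:    # right point at least L away
            k += 1
        dx1, dx2 = x[i] - x[j], x[k] - x[i]
        der[i] = ((dp[i] - dp[j]) / dx1 * dx2 + (dp[k] - dp[i]) / dx2 * dx1) / (dx1 + dx2)
    return der
```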
Type-curve techniques became obsolete in the mid-1980s with the development of PC based
software and the direct generation of more and more sophisticated analytical models, taking
into account the complete pressure and flow rate history.
The core diagnostic tool remained the Bourdet derivative. Solutions were no longer unique,
and the engineer was challenged to search for the most consistent answer by considering
information available from all sources. The match of the model on the real data governed the
validity of these analyses, while straight-line methods were made redundant.
Beyond superposition and the use of more sophisticated models, PC based software allowed
nonlinear regression to improve results by history matching the data with the models.
Despite the spread of more and more complex analytical models and various pseudopressures
developed to turn nonlinear problems into pseudolinear problems accessible to these analytical
models, there was a point where analytical models would not be able to handle the complexity
of some geometries and the high nonlinearity of some diffusion processes. The 1990’s saw the
development of the first numerical models dedicated to well testing, though the spread of
usage of such models only took place in the early 2000’s.
One would expect that most developments to come will be technology related, with more
powerful processing, better graphics and a greater amount of data available. For sure we are also
going to see in the years to come expert systems that will be able to approach the capacity of
human engineers, in a more convincing way than the work done on the subject in the 1990’s.
However the surprise is that we may also see fundamental methodology improvements. The
‘surprise’ of the past decade has been the successful publication of a deconvolution method
which, at last (but with caveats), can be used to combine several build-ups into a virtual, long and clean production response. Other surprises may be coming…
Fig. 1.D.1 – Arnold plot (1919)
Fig. 1.D.2 – Cutler plot (1924)
Things improved marginally with Arps in the 1940’s, with the formulation of constant pressure exponential, hyperbolic and harmonic decline responses.
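In standard notation, with $q_i$ the initial rate, $D_i$ the initial decline rate and $b$ the hyperbolic exponent, these decline relations can be written:

Exponential ($b = 0$): $q(t) = q_i\, e^{-D_i t}$

Hyperbolic ($0 < b < 1$): $q(t) = \dfrac{q_i}{\left(1 + b\, D_i\, t\right)^{1/b}}$

Harmonic ($b = 1$): $q(t) = \dfrac{q_i}{1 + D_i\, t}$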
The first loglog, well test style type-curves came with Fetkovich in the 1970’s, still assuming
constant flowing pressure at a time where the well test community was moving towards
superposition / convolution of the flow rates.
Superposition and derivative came ten years later, with the work of Blasingame et al., and a new presentation was proposed, with normalized rate-pressure (q/∆p) instead of normalized pressure-rate (∆p/q) values, and an adjusted time scale (material balance time).
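In their usual form, the normalized rate and the material balance time referred to above are, with Q the cumulative production:

Normalized rate: $\dfrac{q(t)}{\Delta p(t)} = \dfrac{q(t)}{p_i - p_{wf}(t)}$

Material balance time: $t_e = \dfrac{Q(t)}{q(t)} = \dfrac{1}{q(t)}\int_0^t q(\tau)\, d\tau$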
At this stage, PA’s theory had caught up with PTA, but with no effect on day-to-day PA which
remained, until very recently, constrained to ‘old’ tools implemented at the periphery of
production databases. Producing pressures would be overlooked, hence missing the
opportunity to account for superposition. When forward-thinking people wanted to use both
pressure and rates, they would do this in a PTA software. However this was wrong, as
assumptions made in PTA are not necessarily valid over the time scale of the well production.
The move to modern PA commercial software is recent. One key factor was the spread of permanent surface and downhole pressure gauges, making real analysis using both production and pressure data possible. Commercial PA packages, such as KAPPA’s Topaze, were then released.
The complementarity with PTA was also the key to the recent success of modern PA.
Conversely, the number of data points needed for an analysis remains small:
Low frequency data for production analysis and history matching. If rates are acquired
daily, a pressure point per hour will do. This means less than 100,000 points for ten years.
High frequency data for Pressure Transient Analysis. Assuming 100 build-ups with 1,000
points extracted on a logarithmic time scale for analysis, coincidentally this is another,
albeit different, 100,000 points.
Even for huge data sets, 200,000 points are plenty to cope with the required processing, i.e.
two orders of magnitude less than the median size of the actual raw data set. Unlike the size
of the raw data, 200,000 points is well within the processing capability of today’s PC.
In addition we generally have high frequency, high quality pressure data. We cannot say the same for rates, which are generally of poor quality. In most cases individual well productions are not measured but allocated using simplistic methods. A successful automatic processing of the pressures also requires a successful automatic identification of the events of interest and a correction of the allocated rates.
Though such processing cannot qualify as ‘analysis’, it is an essential element to define our
ability to process permanent data. This will be presented in a dedicated chapter of this book.
Fig. 1.E.1 – Filter window for 200,000 PDG data points
Fig. 1.E.2 – Pressures from PDG and rate data ready for PA and PTA analyses
Fig. 1.F.1 – Formation tester loglog plot
Fig. 1.F.2 – Formation tester history plot
Once installed in the well, the cable may be interrogated at any time using nanosecond bursts
of laser light. As it travels down the optical fiber the light interacts with the structure of the
fiber causing the emission of short pulses of light that bounce back along the cable to the
source. This is then analyzed by the surface instrumentation to determine the temperature, or
other parameters, at the point of origin of the ‘bounce back’. The continuous monitoring of the
returning light allows the construction of a continuous temperature profile along the producing
zones. This can then be used to assess the layer contributions and water and/or gas breakthrough, to monitor the effectiveness of gas lift systems and to help detect problems affecting the productivity potential of the well.
Chapter 2 – Theory
Includes the hypotheses, equations and solutions at the root of the models and methods
implemented in PTA and PA.
Chapters 5 to 8 – wellbore (5), well (6), reservoir (7) and boundary (8) models
These chapters are an itemized review of models used in both PTA and PA. We follow the logic
implemented in the definition of the analytical models used in the Ecrin modules, PTA (Saphir)
and PA (Topaze). We split models by category, progressing away from the wellhead, moving
from wellbore, to well, reservoir and finally boundaries.
Chapter 9 – PVT
Describes the PVT models and flow correlations used by both PTA and PA.
We know that people with a solely academic background may miss the point if they do not understand the practical world; likewise, it makes little sense for wholly practical people to have no understanding of the theoretical background.
Engineers with an excellent practical experience but no theory may be making mistakes of
which they are blissfully and completely unaware.
Knowing the theory means knowing the assumptions, and knowing the assumptions means
knowing the limitations of the theory.
There is one thing for sure: PTA and PA are not just about knowing how to operate the
software. That’s the easy bit. The software is just a tool. It is the Engineer who makes the
interpretation.
So we have decided to provide two levels of reading in this book, and we leave you to choose:
The main, white sections will only show the practical side of things, the qualitative
explanations, and the behavior equations, i.e. those you would have to apply for, say, a
specialized analysis using your own spreadsheet. They will also show basic equations, but not the details of the derivation of these equations. You will be spared disappearing into
dimensionless and Laplace space (Captain).
If you would like, at least once in your life, to follow the process of the derivation of the
diffusion equation, line source solution, finite radius well, closed systems, gas
pseudopressures, the yellow sections are just for you.
1. ‘Advances in Well Test Analysis’: SPE Monograph Vol. 5 by Robert C. Earlougher, Jr. in
1977: This is a monument. Get one even if the content is a bit outdated (no Bourdet
Derivative). It will remain for a long time a mine of references.
2. ‘Transient Well Testing’: SPE Monograph Vol. 23, published in 2009 though it was supposed to come out in 1990. Ten different authors (including OH), 850 pages that are sometimes
pretty heterogeneous, but still a must-have.
3. ‘Modern Well Test Analysis, a computer-aided approach’ by Roland Horne, Petroway Inc.
1995: The first book which really integrated the current methodology based on the Bourdet
derivative. Still today a very good complement.
4. ‘Well Test Analysis: The use of advanced interpretation models’, by Dominique Bourdet,
Elsevier 2002: Three good reasons to get it: (1) with the Bourdet derivative, Dominique
certainly made the single most important contribution to modern Pressure Transient
Analysis and, indeed, to Production Analysis; (2) Dominique started KAPPA with Olivier
Houzé in 1987, and he left in 1991; (3) his book is very close to our philosophy. It now
lacks Production Analysis, deconvolution and numerical models, but it is still a must-have.
For those from the dark side who love equations and wish to understand everything:
1. ‘Conduction of Heat in Solids’, Carslaw & Jaeger: A mother lode from where the petroleum
industry has been mining many ideas. Equations need adaptation, but they are all there.
2. ‘Handbook of Mathematical Functions’, Abramowitz & Stegun: All you need to navigate your
starship through Laplace Space and Bessel Functions. A must-have if you write code.
3. Raj Raghavan: ‘Well Test Analysis’, Prentice Hall 1993: Raj just loves equations. Do not try
to count. He must have been paid by the Integral and this is, actually, the strength of this
book. An excellent set of theoretical references.
There are other very good books; apologies for not referencing them all…
2 – Theory
OH – OSF – DV
In this experiment, Henry Darcy established a linear relation between the horizontal flow rate
across a section of porous medium and the pressure gradient across the same section.
Using today’s conventions, in Field Units, the relation discovered by Darcy’s experiment can be written:

$$\frac{\Delta p}{L} = 887.2\,\frac{\mu\, Q}{k\, A}$$
Its differential form can be given in linear coordinates for linear flow and in cylindrical
coordinates for radial flow. For radial coordinates the flow rate is assumed to be positive for a
producing well, i.e. flow towards the well:
Darcy’s law in linear coordinates, in the x direction: $\dfrac{\partial p}{\partial x} = -887.2\,\dfrac{\mu\, q_x}{k_x\, A}$

Darcy’s law in radial coordinates: $\left[r\,\dfrac{\partial p}{\partial r}\right] = 141.2\,\dfrac{q\,\mu}{k\, h}$
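As a quick illustration of the field-unit form above, the short sketch below computes the pressure gradient across a linear section; the numerical values are arbitrary:

```python
# Pressure gradient from Darcy's law in field units, as written above:
# dp/dL [psi/ft] = 887.2 * mu[cp] * Q[bbl/d] / (k[mD] * A[ft2])
mu, Q, k, A = 1.0, 500.0, 100.0, 2000.0    # arbitrary example values
dp_dL = 887.2 * mu * Q / (k * A)
print(f"pressure gradient: {dp_dL:.2f} psi/ft")   # about 2.22 psi/ft
```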
Darcy’s Law states that the pressure drop between two points, close enough to consider all parameters to be constant, is proportional to the flow rate and to the fluid viscosity, and inversely proportional to the permeability.
Darcy’s Law is a fundamental law in dynamic data analysis. It is used to model the flow in
several compartments in the reservoir:
At any point of the reservoir, it is one of the three equations that will be used to define the
diffusion equation (see next section).
When the well is flowing, it determines the pressure gradients at the sandface.
Close to reservoir boundaries it determines that the pressure gradient towards a no-flow
boundary is flat, or it allows the determination of the inflow from the pressure gradient.
Darcy’s law assumes a linear relation between the flow of fluid in one direction and the pressure gradient, corrected for gravity, in the same direction. This assumes that the flow velocity is small enough to avoid turbulent behaviour.
When there is turbulence, a quadratic term is added and Darcy’s law is replaced by the Forchheimer equation. We then speak about non-Darcy flow. In most cases, non-Darcy
problems will be solved with a numerical model.
There may be as many diffusivity equations as there are assumptions on what is happening downhole. The basic theory in Dynamic Data Analysis uses the simplest possible diffusivity equation, assuming, in particular, a homogeneous reservoir, Darcy flow and a single, slightly compressible fluid of constant viscosity.
Under these conditions, the diffusivity equation is derived from the combination of the principle of conservation of mass, Darcy’s law and a slightly compressible fluid PVT equation, as detailed in the next section.
Some more complex versions of the diffusivity equation will have different components: Darcy’s law may be replaced by the Forchheimer equation, and more complex PVT models may be used: real gas diffusion, multiphase black oil correlations or an Equation of State.
The diffusion equation is derived from the combination of three elementary equations:
The law of conservation of mass: this is a ‘common sense’ law that states that nothing is
ever created or lost; things just move or transform themselves (Antoine Lavoisier, France,
1785). The formulation for an elementary piece of rock is that ‘mass in’ minus ‘mass out’ =
‘accumulation’.
We consider the flow in the x direction of a fluid through a (small) area A, between x and x+∆x and between time t and t+∆t. Considering that 1 bbl/day = 0.23394 ft³/hr, we get the following equation:
Darcy’s law in the x direction: $q_x = -\dfrac{k_x\, A}{887.2\,\mu}\,\dfrac{\partial p}{\partial x}$

So we get: $0.23394\,\dfrac{\partial}{\partial x}\!\left[\dfrac{\rho\, k_x\, A}{887.2\,\mu}\,\dfrac{\partial p}{\partial x}\right] = A\,\dfrac{\partial (\phi\rho)}{\partial t}$

This simplifies to: $\dfrac{\partial (\phi\rho)}{\partial t} = 0.0002637\,\dfrac{\partial}{\partial x}\!\left[\dfrac{k_x\,\rho}{\mu}\,\dfrac{\partial p}{\partial x}\right]$
It is from the equation above that we will start when dealing with real gas. Now we are going
to focus on slightly compressible fluids. The first term can be developed as:
First term: $\dfrac{\partial (\phi\rho)}{\partial t} = \rho\,\dfrac{\partial \phi}{\partial p}\,\dfrac{\partial p}{\partial t} + \phi\,\dfrac{\partial \rho}{\partial p}\,\dfrac{\partial p}{\partial t} = \phi\rho\left[\dfrac{1}{\phi}\dfrac{\partial \phi}{\partial p} + \dfrac{1}{\rho}\dfrac{\partial \rho}{\partial p}\right]\dfrac{\partial p}{\partial t}$

New differential form: $0.0002637\,\dfrac{\partial}{\partial x}\!\left[\dfrac{k_x\,\rho}{\mu}\,\dfrac{\partial p}{\partial x}\right] = \phi\rho\left[\dfrac{1}{\phi}\dfrac{\partial \phi}{\partial p} + \dfrac{1}{\rho}\dfrac{\partial \rho}{\partial p}\right]\dfrac{\partial p}{\partial t}$

The two terms between brackets in the second member of the equation are the formation compressibility and the fluid compressibility:

Formation compressibility: $c_f = \dfrac{1}{\phi}\,\dfrac{\partial \phi}{\partial p}$

Fluid compressibility: $c_{fluid} = \dfrac{1}{\rho}\,\dfrac{\partial \rho}{\partial p}$
New differential form: $\dfrac{\partial p}{\partial t} = \dfrac{0.0002637\,k_x}{\phi\mu\,(c_f + c_{fluid})}\,\dfrac{1}{\rho}\,\dfrac{\partial}{\partial x}\!\left[\rho\,\dfrac{\partial p}{\partial x}\right]$

The last equation required is PVT related, in order to assess the relation between the fluid compressibility and the pressure. The simplest equation that can be used is the slightly compressible fluid, assuming that the fluid compressibility is constant, i.e. independent of the pressure. So we get a total, constant compressibility: $c_t = c_f + c_{fluid}$

Also we can consider that: $\dfrac{\partial}{\partial x}\!\left[\rho\,\dfrac{\partial p}{\partial x}\right] \approx \rho\,\dfrac{\partial^2 p}{\partial x^2}$

We finally get the diffusion equation in the x direction:

Diffusion equation, direction x: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k_x}{\phi\mu c_t}\,\dfrac{\partial^2 p}{\partial x^2}$
The process above was dealing with the flux in only one direction. If we consider now the flux through an arbitrarily small cube in all three directions, we get:

$$\dfrac{\partial p}{\partial t} = \dfrac{0.0002637}{\phi\mu c_t}\left[k_x\,\dfrac{\partial^2 p}{\partial x^2} + k_y\,\dfrac{\partial^2 p}{\partial y^2} + k_z\,\dfrac{\partial^2 p}{\partial z^2}\right]$$

The operator on the right hand side is called the Laplace operator, or Laplacian. It is also written ∆p, but we will avoid this form in order not to cause confusion with the pressure change in time, also noted ∆p.

General form: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\mu c_t}\,\nabla^2 p$

Radial flow: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\mu c_t}\,\dfrac{1}{r}\dfrac{\partial}{\partial r}\!\left[r\,\dfrac{\partial p}{\partial r}\right]$

Linear flow: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\mu c_t}\,\dfrac{\partial^2 p}{\partial x^2}$
Physical meaning: If we look at the last relation and consider the flow in only one direction:
If the curvature of the pressure profile is positive, the pressure will locally increase. If the
curvature is negative the pressure will decrease. The speed of the pressure change, in
whatever direction, will be proportional to this curvature.
If the permeability is large, the pressure change will be fast. Basically, the more permeable
the formation, the quicker the formation fluid will react to a local pressure disturbance.
If the viscosity is large, the pressure change will be slow. Basically, the more viscous the
fluid, the slower the formation fluid will react to a local pressure disturbance.
The ratio k/μ, to which the speed of reaction is proportional, is also called the mobility.
If the porosity is large, the pressure change will be small, and therefore, at a given time,
relatively slow. Basically, the more porous the formation, the lower the pressure change
that will be required to produce / receive the same mass of fluid.
If the total compressibility is large, the pressure change will be small, and therefore, at a
given time, slow. Basically, the more compressible the formation is, the lower the pressure
change required to produce / receive the same mass of fluid.
The term 1/φc_t, to which the amplitude of the reaction is proportional, is also called the storativity.
Mobility and storativity seem to play the same role in the diffusion equation. However, the mobility will also be found in the inner boundary conditions, and this is where the roles of the different parameters will diverge.
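The qualitative statements above about the speed of the pressure response can be checked numerically. The sketch below is a minimal explicit finite-difference integration of the linear diffusion equation (values, grid and time step are arbitrary illustration, not a recommended simulation scheme); each cell is updated by its local curvature multiplied by the diffusivity $0.0002637\,k/(\phi\mu c_t)$:

```python
import numpy as np

# Explicit finite differences for dp/dt = eta * d2p/dx2,
# with eta = 0.0002637 k / (phi mu ct) in ft2/hr (field units)
k, phi, mu, ct = 100.0, 0.1, 1.0, 1e-5        # mD, -, cp, 1/psi (arbitrary)
eta = 0.0002637 * k / (phi * mu * ct)         # hydraulic diffusivity, ft2/hr

dx, dt = 10.0, 0.001                          # ft, hr (dt small enough for stability)
x = np.arange(0.0, 1000.0, dx)
p = np.full(x.size, 5000.0)                   # uniform initial pressure, psi
p[0] = 4000.0                                 # disturbance imposed at x = 0

for _ in range(int(1.0 / dt)):                # simulate one hour
    curvature = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    p[1:-1] += dt * eta * curvature           # speed of change ~ curvature * k/(phi mu ct)

print(p[:5])                                  # pressure profile near the disturbance
```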
Generally, the diffusion equation used in most analytical models is the radial formulation. The
reference point of such a radial solution is the producing or injecting well. This system of
coordinates is well suited to model radial flow to and from the well. In the following we will
stay in radial coordinates.
The diffusion equation shown above describes the flow of what we call a homogeneous
reservoir where this same, unique equation will be applied everywhere in the reservoir. More
complex reservoirs can be modelled, and this diffusion equation will be replaced by
formulations involving different pressures at the same point (double-porosity, double-
permeability reservoirs) or changing diffusivity and mobility in different locations (composite
reservoirs). The different reservoir models are detailed in the chapter on the subject.
In order to define a problem completely we need to know a starting point, i.e. the initial state
of the system, and how the fluid flow will be constrained at the well and at the natural limits of
the reservoir.
In the case of a multilayer analysis, if the reservoir layers are commingled there could be one
such equation for each independent layer. If the pressure difference between the different
layers is not the static gradient in the wellbore, it means that cross-flow will occur between the
different layers as soon as the well is perforated, even if it is not flowing.
It is also possible to start a problem with a dynamic pressure situation. This will be, typically,
the 'restart' of a simulator run from which one wants to run a test on a new, additional well.
This kind of situation is however only possible when using a numerical model.
The simplest realistic model is a vertical well, of radius rw, fully penetrating the formation. The
inner condition is nothing more than Darcy’s law calculated at the sandface. In the case of a
homogeneous reservoir this will write:

Finite radius well: $\left[r\,\dfrac{\partial p}{\partial r}\right]_{r_w,\,t} = 141.2\,\dfrac{q B \mu}{k h}$
As the flow rate q is generally given at standard conditions, the volumetric downhole rate qsf is
calculated by multiplying the standard rate by the reservoir volume factor B. In the simplest
cases this volume factor is considered constant. Otherwise a PVT equation must be used to
dynamically calculate the volume factor.
An unrealistic but very convenient model of well condition is the line source, corresponding to
the same Darcy’s law but for the limit case of a well of radius zero.
Line source well: $\lim_{r \to 0}\left[r\,\dfrac{\partial p}{\partial r}\right]_{t} = 141.2\,\dfrac{q B \mu}{k h}$
The line source problem is interesting because it is easier to solve, faster to run and the
calculation of this solution at r=rw is a good approximation of the finite radius solution and for
any practical purpose it is exactly the same solution within the formation, when r > r_w. It is the
solution of choice to simulate interference tests and image wells for boundary effects (see the
chapter on boundary models).
Other inner boundary conditions correspond to Darcy’s law applied to more complex well
geometries (fractures, limited entry, horizontal, slanted, multilateral, etc), and are detailed in
the chapter on well models.
In most cases, the production of the well will be affected by wellbore effects, generally
modelled using what we call the wellbore storage. This notion is developed in the Wellbore
chapter.
In these cases there is a time delay in the production / shut-in, and the well condition also
includes a wellbore component. The simplest model is a constant wellbore storage applied to a
well opened and shut in at surface. The modified equation, for a finite radius well, with
wellbore storage:
$\left[r\,\dfrac{\partial p}{\partial r}\right]_{r_w,\,t} = \dfrac{141.2\,\mu}{k h}\left[q B + 24\,C\,\dfrac{\partial p_{wf}}{\partial t}\right]$
The simplest condition to model analytically is that there is no boundary at all, i.e. that the reservoir is of infinite extent in all directions. The equation for such a system is:

Infinite reservoir: $\lim_{r \to \infty} p(r, t) = p_i$

Infinite reservoir models are unlikely to be used in production analysis, where such a hypothesis is unlikely to be met over the extended time range of such an analysis.
2.C.1 Derivation
The complete real problem is described with the four following equations:
Homogeneous radial diffusion: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\mu c_t}\,\dfrac{1}{r}\dfrac{\partial}{\partial r}\!\left[r\,\dfrac{\partial p}{\partial r}\right]$

Uniform initial pressure: $p(t = 0, r) = p_i$

Infinite reservoir: $\lim_{r \to \infty} p(r, t) = p_i$

Line source well: $\lim_{r \to 0}\left[r\,\dfrac{\partial p}{\partial r}\right]_{t} = 141.2\,\dfrac{q B \mu}{k h}$
The problem is simplified by introducing dimensionless variables that will integrate all other
parameters and end up with a unique set of equations that we will solve analytically once and
for all. These dimensionless parameters are not only useful to solve the problem. They have a
historic importance, as they were at the origin of the method of type-curve matching (see the
‘old stuff’ in the PTA Chapter). Though it does not look like it, there are not so many ways to
simplify the complete set of equations. The dimensionless parameters are defined as:
Dimensionless radius: $r_D = \dfrac{r}{r_w}$

Dimensionless time: $t_D = 0.0002637\,\dfrac{k\, t}{\phi\mu c_t\, r_w^2}$

Dimensionless pressure: $p_D = \dfrac{k h}{141.2\, q B \mu}\,(p_i - p)$
Please note that it is not strictly necessary to introduce the dimensionless radius to solve the
problem. However we will approximate the solution at the well by taking the line source
solution at r=rw. In addition, these dimensionless terms with this definition of r D will be used to
exactly solve the finite radius problem, later in this chapter. Injecting the dimensionless terms
in the physical problem brings us to the dimensionless problem:
Homogeneous radial diffusion: $\dfrac{\partial p_D}{\partial t_D} = \dfrac{1}{r_D}\dfrac{\partial}{\partial r_D}\!\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]$

Infinite reservoir: $\lim_{r_D \to \infty} p_D(r_D, t_D) = 0$
Line source well: $\lim_{r_D \to 0}\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]_{t_D} = -1$
We will now focus on the dimensionless diffusion equation. As for any diffusion problem, the
equation can be further simplified by moving the problem into Fourier space (Joseph Fourier,
France, 1822) or Laplace space (Pierre-Simon de Laplace, France, 1820). We will use here the
Laplace transform, under which the diffusion equation, combined with the initial condition
becomes:
Laplace Transform: $\bar{p}_D(u, r_D) = \int_{t_D=0}^{\infty} p_D(t_D, r_D)\, e^{-u\, t_D}\, dt_D$

Diffusion Equation in Laplace space: $u\,\bar{p}_D = \dfrac{1}{r_D}\dfrac{\partial}{\partial r_D}\!\left[r_D\,\dfrac{\partial \bar{p}_D}{\partial r_D}\right]$
This equation is known to be a modified Bessel equation (Wilhelm Bessel, Germany, 1824); the generic form of the solution uses modified Bessel functions:

$$\bar{p}_D(u, r_D) = A(u)\,K_0(r_D\sqrt{u}) + B(u)\,I_0(r_D\sqrt{u})$$

K0 and I0 are the modified Bessel functions of order zero. The unknown functions A and B are taken from the inner and outer boundary conditions. This gives:

From the outer boundary condition: $B(u) = 0$

From the inner boundary condition: $A(u) = \dfrac{1}{u}$

Line source solution in Laplace space: $\bar{p}_D(u, r_D) = \dfrac{1}{u}\,K_0(r_D\sqrt{u})$
The real dimensionless solution is the inverse Laplace transform of this function. Generally, the
inverse Laplace transform is numerically obtained using the Stehfest algorithm (Harald
Stehfest, Germany, 1970). In this particular case, and this is the interest of the Line Source
Solution, we know the inverse transform. It is called the Exponential Integral function and it
can be written:
Exponential Integral solution: $p_D(r_D, t_D) = -\dfrac{1}{2}\,Ei\!\left(-\dfrac{r_D^2}{4\, t_D}\right)$
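As a cross-check of this pair of results, the sketch below (assuming numpy and scipy are available) implements the Stehfest algorithm and inverts the Laplace-space line source solution; the result should match the Exponential Integral expression above:

```python
import numpy as np
from math import factorial, log
from scipy.special import kv, exp1

def stehfest_coefficients(N=12):
    # Gaver-Stehfest weights V_i (N must be even)
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k) /
                  (factorial(N // 2 - k) * factorial(k) * factorial(k - 1) *
                   factorial(i - k) * factorial(2 * k - i)))
        V[i - 1] = (-1) ** (N // 2 + i) * s
    return V

def stehfest_invert(F, t, N=12):
    # Numerical inverse Laplace transform of F(u), evaluated at time t
    V = stehfest_coefficients(N)
    u = np.arange(1, N + 1) * log(2.0) / t
    return log(2.0) / t * np.sum(V * np.array([F(ui) for ui in u]))

pD_laplace = lambda u: kv(0, np.sqrt(u)) / u   # line source at rD = 1: K0(sqrt(u))/u

tD = 1.0e4
pD_num = stehfest_invert(pD_laplace, tD)
pD_ana = 0.5 * exp1(1.0 / (4.0 * tD))          # -1/2 Ei(-rD^2/(4 tD)) with rD = 1
print(pD_num, pD_ana)                          # the two values should agree closely
```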
Before we return to the physical world, let us notice an interesting property of the Exponential
Integral: for small negative arguments, it has a logarithmic approximation that will be the
basis of ‘Infinite Acting Radial Flow’. It can be written:
For $t_D > 100\, r_D^2$: $\quad p_D(r_D, t_D) \approx \dfrac{1}{2}\left[\ln\dfrac{t_D}{r_D^2} + 0.80907\right]$
We now just have to replace dimensionless parameters by their real values to get the physical
solution.
Line Source Solution: $p(r, t) = p_i + 70.6\,\dfrac{q B \mu}{k h}\,Ei\!\left(-\dfrac{948.1\,\phi\mu c_t\, r^2}{k\, t}\right)$
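As a simple application of this formula (for example the pressure drop seen at an observation well during an interference test), here is a small sketch with arbitrary values, using $Ei(-x) = -E_1(x)$ for $x > 0$:

```python
from scipy.special import exp1

q, B, mu, k, h = 500.0, 1.2, 1.0, 100.0, 50.0   # STB/d, bbl/STB, cp, mD, ft
phi, ct = 0.1, 1e-5                              # -, 1/psi
r, t = 500.0, 24.0                               # observation distance [ft], time [hr]

x = 948.1 * phi * mu * ct * r**2 / (k * t)
dp = 70.6 * q * B * mu / (k * h) * exp1(x)       # p_i - p(r, t), psi
print(f"pressure drop at r = {r:.0f} ft after {t:.0f} h: {dp:.1f} psi")
```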
A typical line source response is displayed in the figures below, on a loglog scale (with the
Bourdet derivative) and a semilog scale. For first readers, the notions of loglog and semilog
plots are described in the PTA methodology chapter.
Fig. 2.C.1 – Line source loglog plot
Fig. 2.C.2 – Line source semilog plot
The line source equation shows that the pressure change is a unique function of the parameter group r²/t, or more conveniently r/√t. This has an important implication on the physical
understanding of the diffusion process. If one considers, for example, the time it requires for
the pressure change to reach a certain value (for example 1 psi), it will amount to a certain
value of r/√t that we will calculate from the Line Source Solution, and therefore the relation:
In the chapter on boundary models we recommend that one does not use the notion of radius of investigation, as it may be misleading when one considers how such a result is used afterwards.
However, it is an interesting means of understanding the diffusion process. To extend this
notion to cases where the flow geometry may not be strictly radial, we may consider that the
area of investigation is proportional to r², and we therefore have:
If the flow is not strictly horizontal and/or the thickness h is not constant, we will use:
Although the following is valid at any point in the reservoir, we will now focus on the well
response, that will be taken at r=rw. As demonstrated in the previous section, there is a value
of time above which the Line Source Solution reaches a ‘regime’ where one can use an
approximation of the highest interest to the well test interpretation engineer. It is the semilog
approximation and this regime is called Infinite Acting Radial Flow, or IARF.
IARF is characterized by linearity between the pressure change and the logarithm of time. This
is why we also call this the semilog approximation. The slope of the response allows the
calculation of the permeability-thickness, kh.
But before we develop the IARF any further, we are going to introduce two other effects commonly
accounted for in Pressure Transient Analysis: Wellbore storage and Skin effect.
Let us take the case of a well opened and shut in at surface. When you open the well the initial
surface production will be coming from the decompression of the fluid trapped in the wellbore.
In the initial seconds or minutes of the flow the sandface will not even ‘know’ that the well is
opened and the sandface rate will remain virtually zero. Naturally, at some stage we get to a
mass equilibrium, i.e. the sandface mass rate reaches the surface mass rate. This is the time
of the end of the wellbore storage. Conversely, if the well is shut in at surface, the surface rate
will go immediately to zero while the sandface does not know about it. The time of wellbore
storage is this transition time between the effective shut-in time and the time at which the
reservoir stops flowing into the well.
There are two main types of wellbore storage. The first one is modelled by the compression or decompression of the wellbore fluid in the wellbore volume. This is expressed as:

Wellbore storage from fluid compression: $C = V_w\, c_w$, where $V_w$ is the wellbore volume and $c_w$ the compressibility of the wellbore fluid.
The second type of wellbore storage is linked to the rise of the liquid level present in the wellbore. A simplified version is expressed as:

Wellbore storage from liquid level: $C = 144\,\dfrac{A}{\rho}$

Where A is the flow area at the liquid interface and ρ is the fluid density.
The relation between the surface and the sandface rates is then given by:

Wellbore storage equation: $q_{sf} = q B + 24\,C\,\dfrac{\partial p_{wf}}{\partial t}$
The simplest wellbore storage model implies that C is constant. However this is not always the
case, and the various models of storage are described in the Wellbore chapter.
2.D.2 Skin
The skin effect quantifies the difference between the productivity of a well in an ideal case and
its effective productivity in reality:
If, after drilling, completion, cementing and perforating, the pressure drop for a given
production into the wellbore is identical to the one you would forecast in the ideal case for
the same geometry, the skin is zero.
Very often, the reservoir near the wellbore had been invaded and the effective permeability
around the well is lowered, thus a higher pressure drop results for a given production. The
skin is then positive.
Conversely, a stimulated well will have better productivity, hence a lower pressure drop for
a given production. The skin is then considered negative.
Skin may not be constant in time. During the initial ‘clean-up’ period in a well test, skin has
a tendency to reduce. Conversely, over long periods of time, completed wells may get damaged, reducing productivity, hence an increasing skin.
We will consider that a well has a constant skin when the additional pressure drop, or ∆p_Skin, is proportional to the sandface rate. The skin S is a dimensionless factor representative of a pressure change, and integrates the same coefficients as the one in Darcy’s law:

Constant skin S: $\Delta p_{Skin} = p(r_w, t) - p_{wf}(t) = 141.2\,\dfrac{q_{sf}\,\mu}{k h}\,S$
Where p is the pressure in the formation, at a given time, at distance rw, i.e. just beyond the
sandface, while pwf, at a given time, is the well flowing pressure.
A way to model a positive skin effect is to consider an equivalent composite system, with an invaded zone, or skin damage zone, of radius r_ws larger than r_w and permeability k_S lower than k.

Darcy’s law gives the relation between the equivalent skin factor S, r_ws and k_S:

Skin from a radial composite equivalent: $S = \left(\dfrac{k}{k_S} - 1\right)\ln\dfrac{r_{ws}}{r_w}$
Another way to model skin is the notion of equivalent radius, applicable to both positive and
negative skins. The idea is to consider that the well with skin has the same productivity as a
larger or smaller well with no skin. If the skin is positive, the equivalent wellbore radius will be
smaller than rw. If the skin is negative, the equivalent wellbore radius will be larger than rw.
The equation, again straight from Darcy’s law, can also be found at the limit when the permeability k_S above tends to infinity (open hole), and will be given by:

Equivalent wellbore radius: $S = -\ln\dfrac{r_{we}}{r_w}$ or $r_{we} = r_w\, e^{-S}$
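The two relations above are easily combined; the sketch below, with arbitrary values, computes the skin of a damaged zone and the corresponding equivalent wellbore radius:

```python
import math

k, ks = 100.0, 10.0        # reservoir and damaged-zone permeability, mD
rw, rws = 0.3, 3.0         # wellbore and damaged-zone radius, ft

S = (k / ks - 1.0) * math.log(rws / rw)    # skin from the radial composite equivalent
rwe = rw * math.exp(-S)                    # equivalent wellbore radius
print(f"skin = {S:.1f}, equivalent wellbore radius = {rwe:.2e} ft")
```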
We now consider the case of a homogeneous, infinite reservoir produced by a vertical well with
constant wellbore storage and constant skin.
2.D.3 Derivation
We now have a slightly more complex problem to solve:
Homogeneous radial diffusion: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\mu c_t}\,\dfrac{1}{r}\dfrac{\partial}{\partial r}\!\left[r\,\dfrac{\partial p}{\partial r}\right]$

Finite radius well: $\left[r\,\dfrac{\partial p}{\partial r}\right]_{r_w,\,t} = 141.2\,\dfrac{q_{sf}\,\mu}{k h}$

Wellbore storage & skin: $q_{sf} = q B + 24\,C\,\dfrac{\partial p_{wf}}{\partial t}$ and $p - p_{wf} = 141.2\,\dfrac{q_{sf}\,\mu}{k h}\,S$
We will now repeat the definition of the dimensionless terms and add dimensionless sandface
rate and dimensionless wellbore storage. The Skin, being dimensionless, does not need any
conversion:
$r_D = \dfrac{r}{r_w}$ ; $t_D = 0.0002637\,\dfrac{k\, t}{\phi\mu c_t\, r_w^2}$ ; $p_D = \dfrac{k h}{141.2\, q B \mu}\,(p_i - p)$

Dimensionless sandface rate & storage: $q_D = \dfrac{q_{sf}}{q B}$ ; $C_D = \dfrac{0.8936\, C}{\phi\, h\, c_t\, r_w^2}$
Homogeneous radial diffusion: $\dfrac{\partial p_D}{\partial t_D} = \dfrac{1}{r_D}\dfrac{\partial}{\partial r_D}\!\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]$

Finite radius well: $\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]_{r_D=1,\,t_D} = -1$

Wellbore storage & skin: $\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]_{r_D=1,\,t_D} = -q_D$ ; $p_{wfD} = p_D + S\, q_D$ ; $q_D = 1 - C_D\,\dfrac{d p_{wfD}}{d t_D}$
The solution process will be the same as for the line source problem and will not be detailed here. For the finite radius problem (without storage, without skin), the general form is the same as for the line source:

$$\bar{p}_D(u, r_D) = A(u)\,K_0(r_D\sqrt{u}) + B(u)\,I_0(r_D\sqrt{u})$$
As for the line source, the infinite reservoir condition gives B(u) = 0. The only difference is that we now calculate the well equation at $r_D = 1$ and not $r_D = 0$. This gets us to a different value for A(u):

From the inner boundary condition: $A(u) = \dfrac{1}{u\sqrt{u}\,K_1(\sqrt{u})}$

If we define the function: $K_{01}(x) = \dfrac{K_0(x)}{x\,K_1(x)}$

We obtain a modified Bessel equation and solve for inner and outer boundaries; a term $K_1$ appears corresponding to the finite radius well condition, and we finally get the following results:

Adding wellbore storage & skin: $\bar{p}_{wfD}(u) = \dfrac{1}{u}\;\dfrac{S + u\,\bar{p}_{FRD}(1, u)}{1 + u\,C_D\left[S + u\,\bar{p}_{FRD}(1, u)\right]}$

In the homogeneous infinite case: $\bar{p}_{wfD}(u) = \dfrac{1}{u}\;\dfrac{S + K_{01}(\sqrt{u})}{1 + u\,C_D\left[S + K_{01}(\sqrt{u})\right]}$
The problem is then solved in real space by taking the inverse Laplace transform using the Stehfest numerical algorithm. Though the solution is more complex, the difference between this solution and the Exponential Integral solution will stabilize when wellbore storage effects vanish; the residual difference is the skin factor. This links to the IARF by:

After wellbore storage: $p_{wfD}(t_D) \approx -\dfrac{1}{2}\,Ei\!\left(-\dfrac{1}{4\, t_D}\right) + S$
2.D.4 Behavior
The two figures below show the drawdown response of a vertical well with wellbore storage
and skin in a homogeneous infinite reservoir. The plot on the left shows the response on
semilog scale, with the pressure as a function of log(∆t). The plot on the right shows the loglog plot of log(∆p) and the derivative vs log(∆t).
The derivative is developed in the chapter on ‘PTA – General methodology’. The derivative
shown on the right is the absolute value of the slope of the semilog plot on the left. When the
model on the left becomes a straight line, the derivative stabilizes horizontally. The level of
this stabilization is the slope of the model on the left. As all responses end up parallel on the
left plot, then all derivatives merge to the same level on the right plot.
Fig. 2.D.5 – Finite radius solution, semilog scale
Fig. 2.D.6 – Finite radius solution, loglog scale
At early time, the flow is governed by wellbore storage and there is a linear relation between the pressure change and the elapsed time:

Early time, pure wellbore storage: $p(t) = p_i - \dfrac{q B}{24\, C}\,\Delta t$

At late time, Infinite Acting Radial Flow (IARF) is reached, and there is a linear relation between the pressure change and the logarithm of the elapsed time:

IARF Equation: $p(t) = p_i - \dfrac{162.6\, q B \mu}{k h}\left[\log \Delta t + \log\dfrac{k}{\phi\mu c_t\, r_w^2} - 3.228 + 0.8686\, S\right]$
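From the IARF equation, the semilog slope (per log10 cycle) gives the permeability-thickness, and the pressure change read on the IARF line at ∆t = 1 hr gives the skin. A minimal sketch of these two classic calculations, with arbitrary input values, could be:

```python
import math

q, B, mu, h = 500.0, 1.2, 1.0, 50.0      # STB/d, bbl/STB, cp, ft
phi, ct, rw = 0.1, 1e-5, 0.3             # -, 1/psi, ft
m = 25.0                                 # observed semilog slope, psi per log10 cycle
dp_1hr = 350.0                           # (p_i - p) read on the IARF line at 1 hr, psi

k = 162.6 * q * B * mu / (m * h)         # from the slope term of the IARF equation
S = 1.151 * (dp_1hr / m - math.log10(k / (phi * mu * ct * rw**2)) + 3.228)
print(f"k = {k:.1f} mD, skin = {S:.1f}")
```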
No-flow boundary section: $\dfrac{\partial p}{\partial n} = 0$ (n being the direction normal to the boundary)
In this section we focus on closed systems constituted by only no-flow sections, as we want to
introduce the notion of pseudo-steady state (PSS), the main regime of interest in Production
Analysis (PA).
The simplest case of a closed system (because it is the simplest to model analytically) is a
reservoir of circular shape centered at the well and of radius r_e. Such a model may be used to
simulate the behavior of a really closed system or the production of a well in its drainage area:
Homogeneous radial diffusion: $\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\mu c_t}\,\dfrac{1}{r}\dfrac{\partial}{\partial r}\!\left[r\,\dfrac{\partial p}{\partial r}\right]$

Finite radius well: $\left[r\,\dfrac{\partial p}{\partial r}\right]_{r_w,\,t} = 141.2\,\dfrac{q_{sf}\,\mu}{k h}$

Boundary equation, closed circle: $\left[\dfrac{\partial p}{\partial r}\right]_{r_e,\,t} = 0$
Dimensionless parameters are exactly the same as for the infinite finite radius solution and will
not be repeated here. We end up with the following dimensionless problem:
Homogeneous radial diffusion: $\dfrac{\partial p_D}{\partial t_D} = \dfrac{1}{r_D}\dfrac{\partial}{\partial r_D}\!\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]$

Initial pressure & finite radius well: $p_D(t_D = 0, r_D) = 0$ ; $\left[r_D\,\dfrac{\partial p_D}{\partial r_D}\right]_{r_D=1,\,t_D} = -1$

Closed circle OR constant pressure circle: $\left[\dfrac{\partial p_D}{\partial r_D}\right]_{r_{eD},\,t_D} = 0$ OR $p_D(t_D, r_{eD}) = 0$
As for the line source problem we move the diffusion equation into Laplace space and get the following generic solution using modified Bessel functions:

For a closed circle: $\bar{p}_{FRD}(u) = \dfrac{1}{u\sqrt{u}}\;\dfrac{K_0(\sqrt{u})\,I_1(r_{eD}\sqrt{u}) + I_0(\sqrt{u})\,K_1(r_{eD}\sqrt{u})}{I_1(r_{eD}\sqrt{u})\,K_1(\sqrt{u}) - K_1(r_{eD}\sqrt{u})\,I_1(\sqrt{u})}$

For a constant pressure circle: $\bar{p}_{FRD}(u) = \dfrac{1}{u\sqrt{u}}\;\dfrac{K_0(\sqrt{u})\,I_0(r_{eD}\sqrt{u}) - I_0(\sqrt{u})\,K_0(r_{eD}\sqrt{u})}{I_0(r_{eD}\sqrt{u})\,K_1(\sqrt{u}) + K_0(r_{eD}\sqrt{u})\,I_1(\sqrt{u})}$
To the finite radius solution we can add wellbore storage and skin in Laplace space:

Adding wellbore storage & skin: $\bar{p}_{wfD}(u) = \dfrac{1}{u}\;\dfrac{S + u\,\bar{p}_{FRD}(u)}{1 + u\,C_D\left[S + u\,\bar{p}_{FRD}(u)\right]}$

The problem is once again transferred to real space using the Stehfest algorithm. The behavior is studied in the next section. As for IARF we will use an interesting late time approximation:

Late time approximation: $p_{wfD}(t_D) \approx \dfrac{2\, t_D}{r_{eD}^2} + \ln r_{eD} - \dfrac{3}{4} + S$

This will be the origin of the pseudo-steady state behavior. Replacing the dimensionless radius by the drainage area A, we get the generalized PSS equation:

PSS equation: $p_{wfD}(t_D) = \dfrac{2\pi\, r_w^2}{A}\, t_D + \dfrac{1}{2}\ln\dfrac{A}{C_A\, r_w^2} + 0.4045 + S$
C_A is the Dietz shape factor, characteristic of the shape of the reservoir and of the position of the well in the reservoir. In the case of a well at the center of a closed circle we get C_A = 31.62.
Circular PSS equation: $p(t) = p_i - 0.03723\,\dfrac{q B}{\phi\, c_t\, h\, r_e^2}\, t - 141.2\,\dfrac{q B \mu}{k h}\left[\ln\dfrac{r_e}{r_w} - \dfrac{3}{4} + S\right]$
We see that the slope of this linear approximation is inversely proportional to the square of the
circle radius. More generally the slope will be inversely proportional to the reservoir area.
In the case where the reservoir thickness h is not constant, the slope will be inversely proportional to the volume of the reservoir (or of the drainage area). In other words quantifying
the PSS slope will provide an estimate of the reservoir volume and therefore the reserves.
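A back-of-the-envelope sketch of this reserves estimate, by simple material balance (arbitrary values; 1 bbl/day = 0.23394 ft³/hr as used earlier in this chapter):

```python
q, B = 500.0, 1.2          # surface rate [STB/d] and volume factor [bbl/STB]
ct = 1e-5                  # total compressibility, 1/psi
m_pss = 0.05               # observed PSS slope dp/dt, psi/hr (example value)

# c_t * Vp * dp/dt = downhole withdrawal rate  =>  Vp in ft3
Vp = 0.23394 * q * B / (ct * m_pss)
print(f"connected pore volume ~ {Vp / 5.615e6:.1f} MMbbl")
```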
The figures below show the drawdown response of a vertical well with wellbore storage and
skin in a homogeneous reservoir and under three boundary configurations: an infinite reservoir
(green), a circular reservoir with a radius of 6,000 ft (blue) and an even smaller reservoir with a radius of 3,000 ft (red). Responses are shown on a linear scale (left) and a loglog scale (right)
together with the Bourdet derivative defined in the PTA chapter.
During production, when the boundary is detected, the behavior deviates from Infinite Acting
Radial Flow to reach Pseudo-Steady State according to the equation above. PSS is
characterized by linearity between the pressure change and the elapsed time on the linear
scale.
On the loglog plot PSS is characterized by a unit slope of the Bourdet derivative. Though it is
slower, the pressure change also tends towards merging with the pressure derivative on the
same unit slope.
Fig. 2.E.2 – Finite radius PSS solution, linear scale
Fig. 2.E.3 – Finite radius PSS solution, loglog scale
Drawing a straight line on the linear plot will show that the slope of the ‘r_e = 3,000 ft’ line is four times larger than the slope of the ‘r_e = 6,000 ft’ line.
Looking at the time of deviation from IARF on the loglog plot, we see that this occurs four times earlier for the 3,000 ft case than for the 6,000 ft case.
In both cases we see the inverse proportionality of the slope and the time of divergence to the
square of the reservoir radius or, more generally to the reservoir area.
The circular model is only one example of closed systems, and Pseudo-Steady State exists for
all closed systems. It is a regime encountered after a certain time of constant rate production.
Although it is a flow regime of interest for Pressure Transient Analysis, it is THE flow regime of
interest in Production Analysis. Pseudo-Steady State may sometimes not only be seen as a
result of reaching the outer boundary of the reservoir; in a producing field, for example, there
is a time when equilibrium between the production of all the wells may be reached, and, for
each well, Pseudo-Steady State will occur within the well drainage area.
For any problem involving a linear diffusion equation, the main superposition principles are:
Linear combinations of solutions honoring the diffusion equation also honor this equation.
At any flux point (well, boundary), the flux resulting from the linear combination of
solutions will be the same linear combination of the corresponding fluxes.
If a linear combination of solutions honors the diffusion equation and the different flux and
boundary conditions at any time, then it is THE solution of the problem.
From these principles it is easy to build the following series of rules for superposition in time:
The pressure change due to the production q of a given system is q times the unit rate
solution of the same system. This extends to injections using a negative rate.
To simulate the sequence of a constant rate q1 from time zero to time t1, followed by the production q2 from time t1 to infinity, you can superpose the production at rate q1 from time zero to infinity and a production of rate (q2 − q1) from time t1 to infinity.
During the initial production phase, the pressure will be given by (noting ∆p(q, t) the pressure change created by a constant rate q after a production time t):

$p(t) = p_i - \Delta p(q, t)$

The pressure change during the build-up is the superposition of the pressure change due to the production of q from time 0 and an injection of q starting at time t_p. This will be written:

$p(t_p + \Delta t) = p_i - \Delta p(q, t_p + \Delta t) + \Delta p(q, \Delta t)$

The initial pressure may not be known at this stage. We generally start from the pressure change observed during the shut-in, i.e. starting from the last flowing pressure before shut-in:

$\Delta p_{BU}(\Delta t) = p(t_p + \Delta t) - p_{wf}(t_p)$

And we get the build-up superposition relationship: the pressure change during a build-up, i.e. the difference between the current pressure and the last flowing pressure, can be calculated as the simple superposition of elementary drawdown solutions:

$\Delta p_{BU}(\Delta t) = \Delta p(q, t_p) + \Delta p(q, \Delta t) - \Delta p(q, t_p + \Delta t)$
We calculate the pressure at any time t during the period of flow q_n, as the superposition of the elementary drawdown solutions created by each rate change:

$p(t) = p_i - \sum_{i=1}^{n} \Delta p\!\left(q_i - q_{i-1},\; t - t_{i-1}\right) \qquad (q_0 = 0)$

In the diagram below, we have q_n = 0. For Pressure Transient Analysis we often deal with shut-in pressures after more or less complex production histories. However the following equations are valid for any rates.

If the period of interest is a producing period, or multi-rate drawdown, the pressure change of interest will correspond to the above formula. In the case of a shut-in after a multi-rate production, the interpretation engineer will, as for a simple build-up, consider the pressure change since the well was shut-in.
We have shown how multirate solutions can be derived from elementary drawdown solutions.
Horner time, superposition time, etc, will be introduced in the PTA methodology chapter.
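A minimal sketch of this multirate superposition, using the line source (Exponential Integral) solution of section 2.C as the elementary drawdown response and arbitrary values:

```python
from scipy.special import exp1

B, mu, k, h, phi, ct, rw = 1.2, 1.0, 100.0, 50.0, 0.1, 1e-5, 0.3

def dp_unit(t):
    """Pressure change at the well for a 1 STB/d constant rate, line source solution."""
    return 70.6 * B * mu / (k * h) * exp1(948.1 * phi * mu * ct * rw**2 / (k * t))

# (start time [hr], new rate [STB/d]); the last entry is a shut-in
rate_history = [(0.0, 500.0), (24.0, 800.0), (48.0, 0.0)]

def dp(t):
    total, q_prev = 0.0, 0.0
    for t_i, q_i in rate_history:
        if t > t_i:
            total += (q_i - q_prev) * dp_unit(t - t_i)   # superposed rate changes
        q_prev = q_i
    return total

print(f"dp 12 h into the shut-in: {dp(60.0):.1f} psi")
```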
In a way similar to the superposition in time introduced in the previous section, superposition
in space tells us that if two or more solutions honor the diffusion equation, for example two
line sources coming from different wells, any linear combination of these solutions will honor
the diffusion equation. The resulting inner and outer boundary conditions will be the same
linear combination of the conditions generated by the individual solutions.
Using this principle, the method of image wells consists of superposing individual mirrored
infinite solutions to create a pseudo-boundary. In the case of a single sealing fault of infinite
extent, we will add two pressure changes: (1) the pressure change due to the well response in
an infinite reservoir, and (2) the interference solution of an image well, symmetric to the well
with respect to the fault.
The sum of these two pressure changes, at any point of our half plane reservoir, will honor the
diffusion equation (superposition in space). Because the response will be symmetric with
respect to the fault, the continuous pressure gradient orthogonal to the fault will have to be
zero, and the no flow boundary condition will be honored. All other conditions, initial and well,
being honored, we have built the exact solution to our problem.
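As an illustration of this construction (a sketch only: line-source well, no wellbore storage, hypothetical distances and rates), the half-plane solution is simply the sum of the well term and of an image term at twice the fault distance:

```python
# Sketch of superposition in space: one sealing fault via an image well.
import numpy as np
from scipy.special import exp1

k, h, phi, mu, ct, B, q = 10.0, 100.0, 0.10, 1.0, 1e-5, 1.0, 1000.0  # illustrative values
L_fault = 500.0  # distance from the well to the fault, ft

def dp_line_source(r_ft, t_hr):
    """Infinite-acting line-source pressure drop (psi) at distance r and time t."""
    x = 948.0 * phi * mu * ct * r_ft**2 / (k * t_hr)
    return 70.6 * q * B * mu / (k * h) * exp1(x)

def dp_sealing_fault(r_ft, t_hr):
    """Well response plus the interference of an image well at distance 2*L_fault."""
    return dp_line_source(r_ft, t_hr) + dp_line_source(2.0 * L_fault, t_hr)

t = np.logspace(0, 4, 5)
print(dp_sealing_fault(0.3, t))   # pressure drop at the (real) wellbore radius
```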
Strictly speaking, the interference solution added to the infinite solution is not always a line
source. It has to be the exact interference solution from the image well. The time delay due to
wellbore storage may need to be taken into account. In the case of a fracture with a sealing
fault this will involve an image fracture properly positioned. In most cases the difference
between the proper interference solution and a line source will be negligible, but not always.
Note: one could argue that, when the number of image wells is limited (sealing faults,
intersecting faults for some angles) the sum of the image solutions constitutes the exact
analytical model. Conversely, when the sum is infinite the resulting solution could be classified
as a semi-analytical solution. This is correct, but solutions using image wells are nevertheless
an important category of models that we wanted to single out.
Integral solutions are generally computationally slower than closed form solutions, and the
difficulty is to re-formulate the integrals, optimize the components and use extrapolation
whenever feasible, in order to reduce the calculation time to a level acceptable to the end user.
This is especially so in the case of nonlinear regression. As an example, the equation below is
the solution in Laplace space for a horizontal well in a closed circle.
$$\bar{p}_D(u) = \frac{\tilde{q}}{2\,k\,h_D\,u}\int_{-L_h}^{L_h}\left\{K_0\!\left(\tilde{r}_D\sqrt{u}\right)+\frac{K_1\!\left(r_{eD}\sqrt{u}\right)}{I_1\!\left(r_{eD}\sqrt{u}\right)}\,I_0\!\left(\tilde{r}_D\sqrt{u}\right)+2\sum_{n=1}^{\infty}\cos\frac{n\pi z}{h}\cos\frac{n\pi z_w}{h}\,F_n\right\}d\tilde{\alpha}$$
$$\text{where}\quad F_n = K_0\!\left(\tilde{r}_D\,\varepsilon_n\right)+\frac{K_1\!\left(r_{eD}\,\varepsilon_n\right)}{I_1\!\left(r_{eD}\,\varepsilon_n\right)}\,I_0\!\left(\tilde{r}_D\,\varepsilon_n\right),\qquad \varepsilon_n=\sqrt{u+\frac{n^2\pi^2}{h_D^2}}$$
Application: linear composite, leaky faults, combinations of linear and radial geometries.
This solution has the advantage of focusing on the boundary only, and solves a 3-D problem
with a 2-D solution, or a 2-D problem with a 1-D solution.
However there are two major shortcomings: (1) each block delimited by a boundary must be
homogeneous, and (2) it involves the inversion of a densely populated matrix, and this can be
very computer intensive.
Because of these two shortcomings, numerical models are currently preferred and more
developed in the industry.
Numerical models address the two major limitations of the modelling methods presented
above: they can model complex geometries, and they can model nonlinear diffusion problems
for which the superposition in time and space of analytical models will not work.
In the process, we may be tempted to ignore the impact of the known parameters and focus
on unknown parameters. Worse, engineers with improper training may use software defaults
for ‘known’ parameters without considering the impact on the results.
This section is a summarized guide to the influence of all parameters involved in the diffusion
process, whether we input or calculate them.
You may wish to reproduce the whole process in the chapter below by running successive test
designs in Saphir and compare the results using the multi-gauge or multi model option.
We simulate the simplest model that can reproduce the three following flow regimes: Wellbore
Storage and Skin, Infinite Acting Radial Flow and Pseudo-Steady State. We model a vertical
well in a circular homogeneous reservoir centered at the well:
Input parameters: rw = 0.3 ft; h = 100 ft; φ = 10%; ct = 1.e-5 psi⁻¹; μ = 1 cp
Flow rate history: q = 1,000 stb/day with B = 1; 1,000 hours of flow; 1,000 hours of shut-in
Interpretation ‘results’: pi = 5,000 psi; C = 0.01 bbl/psi; Skin = 0; k = 10 mD; re = 1,500 ft
The result of such design, on both production and buildup, is shown below. The loglog plot
shows both drawdown and buildup periods.
Fig. 2.H.1 – Test Design of the reference case: history and loglog plots
The early time hump of the derivative is the transition between pure wellbore storage
(early time unit slope) and Infinite Acting Radial Flow; the magnitude of the hump is
governed by the skin.
Infinite Acting Radial Flow (IARF) corresponds to the stabilization of the Bourdet derivative.
At late time, the production (drawdown) period shows a unit slope corresponding to
Pseudo-Steady State, whilst during the shut-in the pressure stabilizes at the average
pressure and the derivative takes a dive (also see the chapter on ‘Boundary models’).
In the following we are focusing on the production (drawdown) and study the effect caused by
changes in both ‘known’ and ‘unknown’ parameters.
In this design we know ALL parameters. In the following, however, we will make a split
between ‘unknown’ and ‘known’ parameters. ‘Unknown’ parameters are those we generally
look for in a Pressure Transient or Production Analysis.
In the case of the simulated example, they are the wellbore storage coefficient, the skin factor,
the reservoir permeability and the radius of the circular boundary.
2.H.2.a Wellbore storage
The value of C has a major effect, which is actually exaggerated by the logarithmic time scale.
You can see on the linear history plot that all responses seem to be the same, however.
Infinite Acting Radial Flow: when the influence of wellbore storage is over, all responses merge
together, both in terms of pressure and derivative. Wellbore storage does not play any role
except that it masks infinite acting radial flow for a time that is proportional to the value of C.
PSS: Wellbore storage does not affect the late time response.
Fig. 2.H.4 – Effect of wellbore storage, semilog and history plot
2.H.2.b Skin
Below is shown the response with all default parameters and a variable skin. Values for skin
are -3, 0, 2, 5 and 10.
Storage: Skin does not change the position of the early time unit slope (pure wellbore storage)
but affects the amplitude of the hump. A larger skin will produce a larger hump, hence
delaying the time at which Infinite Acting Radial Flow is reached.
IARF: Once IARF is reached, the skin has no effect on the vertical position of the derivative,
but has a cumulative effect on the amplitude of the pressure.
PSS: Skin does not have an effect on the time at which PSS is reached or on the derivative
response at the end. However the cumulative effect on the pressure remains and all responses
‘bend’ and remain parallel when PSS is reached (see history plot below).
2.H.2.c Permeability
The figure below presents the response with all default parameters except the permeability.
Values for k are 2, 5, 10, 20 and 50 mD.
Storage and IARF: The derivative responses have the same shape but they are translated
along the wellbore storage line of unit slope. When the permeability is higher, the reservoir
reacts faster and deviates earlier from pure wellbore storage. The level of stabilization of the
derivative, i.e. the slope of the semilog plot, is inversely proportional to k. For this reason the
responses diverge on the semilog plot, the different slopes being inversely proportional to k.
PSS: At late time all derivative signals merge to a single unit slope. This is linked to the fact
that permeability has no effect on the material balance equation.
Fig. 2.H.8 – Influence of the reservoir permeability, semilog and history plot
2.H.2.d Reservoir size
The time at which PSS occurs depends on re. As the governing group in the diffusion equation
is t/r², when re is multiplied by 2 the time at which PSS occurs is multiplied by 4. When PSS
occurs the slope of the pressure response, on the history plot, is inversely proportional to the
volume of the reservoir, and therefore inversely proportional to re².
In the shut-in the pressure stabilizes to the average pressure obtained by simple material
balance. Unlike the cases above, the pressures, for a given production, do not stabilize at the
same value. Again the depletion (pi–pav) is inversely proportional to the reservoir pore volume,
i.e. inversely proportional to re².
Fig. 2.H.10 – Effect of the reservoir size, semilog and history plot
2.H.3.a Wellbore radius
The effect of a change in the wellbore radius is strictly the same as the consequence of a skin
change: early time amplitude of the derivative hump, no middle time and late time effect on
the derivative, but a shift in the pressure that stays constant once wellbore storage effects are
over. The equivalence between wellbore radius and skin is hardly a surprise, as skin can also
be defined with respect to an equivalent wellbore radius. The well response is in fact a function
of the equivalent wellbore radius rwe = rw·e⁻Skin.
Fig. 2.H.12 – Effect of wellbore radius rw, semilog and history plot
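A quick numeric illustration of this equivalence, using the rwe = rw·e⁻Skin relation above (values are illustrative only):

```python
# Effective wellbore radius rwe = rw * exp(-Skin) for a few skin values.
import math
rw = 0.3  # ft
for skin in (-3, 0, 2, 5, 10):
    print(f"Skin={skin:>3}  rwe={rw * math.exp(-skin):10.4f} ft")
```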
2.H.3.b Porosity
The figure below presents the response with all default parameters but varying the porosity.
Values for φ are 3%, 10% and 30%.
Storage and IARF: Porosity behaves like the skin or the well radius. A smaller porosity
produces a higher hump on the derivative but does not change the derivative IARF level. The
equivalence between porosity and skin is used in two different areas. In interference tests the
skin has a marginal influence, and the pressure amplitude is used to assess the porosity. In
hydrogeology, one will assume a value of skin (generally zero) and use the absolute value of
the pressure change to assess the storativity S, i.e. the porosity.
For a given reservoir size, the time for PSS is proportional to φ. Underestimating the porosity
by 10% will provide an overestimation of the reservoir bulk volume of 10%, and therefore an
overestimation of the boundary distance. The total pore volume will remain correct.
Fig. 2.H.14 – Effect of the reservoir porosity, semilog and history plot
Fig. 2.H.16 – Effect of the total compressibility, semilog and history plot
2.H.3.d Viscosity
The next figure illustrates the response with default parameters but varying the fluid viscosity.
Values for μ are 0.2, 0.5, 1, 2 and 5 cp. If we compare the response with Fig. 2.H.8
illustrating the effect of a permeability change (above), we see that the sensitivity to viscosity
is exactly opposite to the sensitivity to permeability. At early time (Storage) and middle time
(IARF), the derivative responses have the same shape but are translated along the wellbore
storage line of unit slope. When the viscosity is lower, the reservoir reacts faster and deviates
earlier from pure wellbore storage. The levels of stabilization of the derivative and the semilog
slopes are proportional to μ. At late time all derivative signals merge to a single unit slope. In
other words, the sensitivity to 1/μ is the same as the sensitivity to k on all parts of the
response. This means that we have another governing group, k/μ, also called the mobility.
Fig. 2.H.18 – Effect of the fluid viscosity, semilog and history plot
2.H.3.e Thickness
Illustrated below is the response with all default parameters constant and a varying net
drained thickness. Values for h are 20, 50, 100, 200 and 500 ft.
Storage and IARF: Changing the thickness has a similar effect to changing the permeability
and an effect opposite to changing the viscosity. In other words, the governing group that
defines the early time response, apart from wellbore storage and skin, is kh/μ.
PSS: Unlike permeability and viscosity, the reservoir thickness also has an effect on the late
time material balance calculation. Also, the time at which the derivative deviates from IARF
towards PSS does not change, and therefore the influence of the thickness on the position of
the PSS straight line is similar to the sensitivity to the reservoir porosity or the compressibility.
Fig. 2.H.20 – Effect of the reservoir thickness, semilog and history plot
2.H.3.f Rate
Fig. 2.H.22 below illustrates the response with all default parameters constant, but with a
variable rate for each simulation. Values for qB are 600, 800, 1000, 1200 and 1400 rb/d.
The result of varying qB corresponds to a straight multiplication of the pressure change from
pi. The loglog response is shifted vertically, and the semilog and history plots are vertically
compressed or expanded, the fixed point being the initial pressure.
Fig. 2.H.22 – Effect of the rate qB, semilog and history plot
2.H.4 Conclusions
2.H.4.a on this series of designs
Beyond the arbitrary split between input and output parameters set in the methodology of
Pressure Transient Analysis, we see that several groups of parameters govern the response of
the well / reservoir system.
Pure wellbore storage: The absolute position of the early time unit slope is only a function
of the wellbore storage C.
Transition from pure wellbore storage to IARF: The shape of the hump, which originally was
set to CDe2S when dealing with type curves, is actually a function of C and rw·e⁻Skin, and is
also slightly affected by φ and ct. The whole curve is translated along the unit slope as a
function of k, μ and h, the governing group being kh/μ.
IARF: The governing group is kh/μ, defining the semilog slope, hence the level of the
derivative stabilization when IARF is reached.
There is a residual effect of rw, skin, φ, μ and ct that defines the constant term of the IARF
straight line. Generally the Skin factor is calculated from this constant term.
At late time the parameters that govern the position of the PSS unit slope are re, φ, ct and
h. The governing group is φ·ct·h·re². You may actually prefer the group π·re²·h·φ·ct, and
you get Vpore·ct, where Vpore is the reservoir pore volume. What we arrived at here is
nothing more than material balance.
nothing more than material balance.
There is a residual effect of all factors that affected the transient diffusion before PSS was
reached, and which constitute the constant term of the PSS straight line.
$$p(t) = p_i - \frac{162.6\,qB\mu}{kh}\left[\log t + \log\frac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,\mathrm{Skin}\right]$$
…and if we re-shuffle it a bit, we get:
$$p(t) = p_i - \frac{162.6\,qB}{kh/\mu}\left[\log t + \log\frac{kh}{\mu} - \log\left(h\,\phi\,c_t\right) - 2\,\log\!\left(r_w\,e^{-\mathrm{Skin}}\right) - 3.228\right]$$
One can see that the slope is a function of kh/μ, and that there is a constant term that shows,
among other things, the residual effect of rw·e⁻Skin, φ and ct.
$$p(t) = p_i - 0.234\,\frac{qB}{\phi\,c_t\,h\,A}\,t - 141.2\,\frac{qB\mu}{kh}\left[\frac{1}{2}\ln\frac{A}{C_A\,r_w^2} + 0.4045 + \mathrm{Skin}\right]$$
…and if we re-shuffle it a bit again, without worrying about calculating a few constants,
including the value of the shape factor for a circle, we get:
$$p(t) = p_i - a_1\,qB\left[\frac{1}{V_{pore}\,c_t}\,t + f\!\left(\frac{kh}{\mu},\; r_w\,e^{-\mathrm{Skin}},\; \phi,\; r_e\right)\right]$$
One can see that the slope is a function of the porous volume times the compressibility, while
the constant term is a function of all the transients that happened before PSS occurred.
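A small numerical sketch of these two groups, using the reference-case values and the field-unit constants 162.6 and 0.234 from the equations above (this is only an illustration, not a general utility):

```python
# IARF semilog slope -> kh/mu ; PSS linear slope -> Vpore*ct (material balance).
import math

q, B, mu = 1000.0, 1.0, 1.0            # stb/d, rb/stb, cp
k, h = 10.0, 100.0                     # mD, ft
phi, ct, re = 0.10, 1e-5, 1500.0       # -, 1/psi, ft

m_iarf = 162.6 * q * B * mu / (k * h)          # IARF slope, psi per log-cycle
A = math.pi * re**2                            # drainage area, ft^2
m_pss = 0.234 * q * B / (phi * ct * h * A)     # PSS slope, psi per hour

print(f"IARF slope: {m_iarf:.1f} psi/cycle -> kh = {162.6*q*B*mu/m_iarf:.0f} mD.ft")
print(f"PSS  slope: {m_pss:.3f} psi/hr    -> Vpore*ct = {0.234*q*B/m_pss:.1f} ft3/psi")
```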
What we will do now is consider that the truth is our reference case. From this reference case
we will change one of the ‘known’ parameters to a wrong value, off by, say, 10%. Then
optimize the model by varying the ‘unknown’ parameters to match the reference case and see
how this affects these parameters (the results from Pressure Transient or Production
Analyses).
Fig. 2.H.23 – h at 25% below ‘true’ – Left plot is the model using all other ‘exact’ unknown
Right plot is the match after optimization of the unknown parameters
kh is OK – k is 33% above ‘true’ – re is 15% above ‘true’ – Skin at -0.14 instead of 0
The action here is fairly straightforward, so we will spare you the figures and only show you
the results (a quick numerical check of the first rule is sketched after this list):
Well radius: If we over-estimate rw by 10%, the calculated skin will be over-estimated with
a shift of 0.1. The general rule is that the false skin induced by this incorrect rw will be
ln(rw,wrong / rw,right). All other parameters will be OK.
Porosity: If we over-estimate φ by 10%, k and kh will be OK, there will be a marginal error
on the skin, the reservoir size will be underestimated by 10%, and hence the boundary
distances will be underestimated by 5%.
Rate: If we over-estimate q.B by 10%, C, k.h and k will be overestimated by 10%. The
reservoir size will also be overestimated by 10%, and hence the boundary distances will be
overestimated by 5%. There will also be a minor skin difference.
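The sketch below is a hedged numerical check of the wellbore radius rule (it only uses the rwe = rw·e⁻Skin equivalence quoted earlier; the values are illustrative):

```python
# False skin induced by an incorrect wellbore radius: dS = ln(rw_wrong / rw_true).
import math
rw_true = 0.30                      # ft
for err in (0.9, 1.0, 1.1, 1.25):   # -10%, exact, +10%, +25%
    rw_wrong = err * rw_true
    print(f"rw off by {100*(err-1):+5.0f}%  ->  skin shift {math.log(rw_wrong/rw_true):+6.3f}")
```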
In some cases we can try to keep using the principle of superposition by substituting the
pressure, and in some cases the time, with a function of the pressure that makes the equation
a bit more linear. This is the case for the nonlinear diffusion of dry gas, where we will use what
is called pseudopressure and pseudotime.
We have considered so far that the reservoir fluid was slightly compressible and that viscosity
and compressibility were constant. This assumption is not valid for gases.
The gas viscosity is also a function of pressure and temperature. The Z factor and viscosity
may be input from tables at a given temperature or we may use correlations. The correlations
for Z factor (Beggs & Brill, Dranchuk et al, Hall and Yarborough) and for viscosity (Lee et al,
Carr et al) can also be matched using flash reference data. From the Z factor table /
correlation, one can then calculate the volume factor and compressibility correlations.
The two figures below give a typical relation between Z and μ as a function of p at a given T:
Volume factor and compressibility are then calculated from the Z factor:
Volume factor and compressibility: $B_g = \dfrac{Z\,p_{sc}\,T}{p\,T_{sc}}$ and $c_g = -\dfrac{1}{B_g}\dfrac{dB_g}{dp} = \dfrac{1}{p} - \dfrac{1}{Z}\dfrac{dZ}{dp}$
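As an illustration (the Z(p) table below is synthetic and purely illustrative), Bg and cg can be evaluated numerically from a tabulated Z factor using the two relations above:

```python
# Bg and cg from a tabulated Z(p) at constant temperature, using
#   Bg = Z*psc*T/(p*Tsc)   and   cg = 1/p - (1/Z)*dZ/dp
import numpy as np

psc, Tsc, T = 14.7, 520.0, 660.0          # psia, deg R, deg R (illustrative)
p = np.linspace(500.0, 5000.0, 10)        # psia
Z = 1.0 - 2.0e-4 * p + 2.5e-8 * p**2      # synthetic Z(p) table, for illustration only

Bg = Z * psc * T / (p * Tsc)              # reservoir volume per standard volume
dZdp = np.gradient(Z, p)                  # numerical derivative of the table
cg = 1.0 / p - dZdp / Z                   # 1/psi

for p_, bg_, cg_ in zip(p, Bg, cg):
    print(f"p={p_:6.0f} psia  Bg={bg_:7.4f}  cg={cg_:9.2e} 1/psi")
```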
We could just feed these PVT elements straight into a numerical simulator. However we would
be short of diagnostic plots if we were just blindly simulating and matching.
The idea is to rearrange the equations by changing variables in a gas diffusion equation that
looks like the slightly compressible fluid equation. If we do so we could extend the
methodology and diagnostic plots developed for the linear process to the nonlinear process of
gas diffusion. This is done by introducing pseudopressures.
Raw diffusion equation: $\phi\,\dfrac{\partial \rho}{\partial t} = 0.0002637\,k\,\dfrac{\partial}{\partial x}\left(\dfrac{\rho}{\mu}\,\dfrac{\partial p}{\partial x}\right)$

The real gas law gives: $PV = ZnRT$ where $n = \dfrac{m}{M}$

This gives the gas density: $\rho = \dfrac{m}{V} = \dfrac{M}{RT}\,\dfrac{p}{Z}$

Diffusion equation becomes: $\phi\,\dfrac{\partial}{\partial t}\left(\dfrac{M}{RT}\,\dfrac{p}{Z}\right) = 0.0002637\,k\,\dfrac{\partial}{\partial x}\left(\dfrac{M}{RT}\,\dfrac{p}{\mu Z}\,\dfrac{\partial p}{\partial x}\right)$

Which simplifies to: $\phi\,\dfrac{\partial}{\partial t}\left(\dfrac{p}{Z}\right) = 0.0002637\,k\,\dfrac{\partial}{\partial x}\left(\dfrac{p}{\mu Z}\,\dfrac{\partial p}{\partial x}\right)$

The first term develops as: $\dfrac{\partial}{\partial t}\left(\dfrac{p}{Z}\right) = \dfrac{p}{Z}\left[\dfrac{1}{p} - \dfrac{1}{Z}\left(\dfrac{\partial Z}{\partial p}\right)_T\right]\dfrac{\partial p}{\partial t}$

The gas compressibility is written: $c_g = \dfrac{1}{\rho}\left(\dfrac{\partial \rho}{\partial p}\right)_T = \dfrac{RTZ}{Mp}\,\dfrac{\partial}{\partial p}\left(\dfrac{Mp}{RTZ}\right)_T = \dfrac{1}{p} - \dfrac{1}{Z}\left(\dfrac{\partial Z}{\partial p}\right)_T$

So: $\dfrac{\partial}{\partial t}\left(\phi\,\dfrac{p}{Z}\right) = \phi\,\left(c_f + c_g\right)\dfrac{p}{Z}\,\dfrac{\partial p}{\partial t} = \phi\,c_t\,\dfrac{p}{Z}\,\dfrac{\partial p}{\partial t}$
And the diffusion equation becomes: $\phi\,c_t\,\dfrac{p}{Z}\,\dfrac{\partial p}{\partial t} = 0.0002637\,k\,\dfrac{\partial}{\partial x}\left(\dfrac{p}{\mu Z}\,\dfrac{\partial p}{\partial x}\right)$

Or: $\dfrac{p}{Z}\,\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,c_t}\,\dfrac{\partial}{\partial x}\left(\dfrac{p}{\mu Z}\,\dfrac{\partial p}{\partial x}\right)$

We add the viscosity on both sides of the equation:

Gas diffusion equation: $\dfrac{p}{\mu Z}\,\dfrac{\partial p}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\dfrac{\partial}{\partial x}\left(\dfrac{p}{\mu Z}\,\dfrac{\partial p}{\partial x}\right)$

We introduce the pseudopressure in order to eliminate the additional terms in the gas diffusion
equation (there is a historical factor 2 in this equation):

Gas pseudopressure: $m(p) = 2\displaystyle\int_0^p \dfrac{p}{\mu Z}\,dp$

Taking the partial differential: $\dfrac{\partial m(p)}{\partial t} = \dfrac{\partial m(p)}{\partial p}\,\dfrac{\partial p}{\partial t} = \dfrac{2p}{\mu Z}\,\dfrac{\partial p}{\partial t}$

And, in the same way: $\dfrac{\partial m(p)}{\partial x} = \dfrac{\partial m(p)}{\partial p}\,\dfrac{\partial p}{\partial x} = \dfrac{2p}{\mu Z}\,\dfrac{\partial p}{\partial x}$

The gas diffusion equation becomes: $\dfrac{\partial m(p)}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\dfrac{\partial^2 m(p)}{\partial x^2}$

Extending to x, y, z directions: $\dfrac{\partial m(p)}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\nabla^2 m(p)$
We now arrive at the same formulation as the case of a slightly compressible fluid.
Gas pseudopressure: $m(p) = 2\displaystyle\int_0^p \dfrac{p}{\mu Z}\,dp$
The field unit for pseudopressures is psi²/cp. A typical pseudopressure response as a function
of pressure, and for a given temperature, is shown below. There is a rule of thumb regarding
the behavior of this function: at low pressure (below roughly 2,000 psia) μZ is fairly constant
and m(p) behaves like p², while at high pressure (above roughly 3,000 psia) p/μZ is fairly
constant and m(p) behaves linearly with p.
Fig. – Typical gas pseudopressure m(p), in psi²/cp, versus pressure at a given temperature
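A minimal sketch of the m(p) integration from tabulated properties (the μ(p) and Z(p) tables below are synthetic and purely illustrative):

```python
# Gas pseudopressure m(p) = 2 * integral of p'/(mu(p') Z(p')) dp'   [psi^2/cp]
import numpy as np

p  = np.linspace(14.7, 5000.0, 500)            # psia (lower limit approximated at 14.7)
Z  = 1.0 - 2.0e-4 * p + 2.5e-8 * p**2          # synthetic Z(p) table
mu = 0.012 + 2.0e-6 * p                        # synthetic mu(p) table, cp

integrand = 2.0 * p / (mu * Z)
# cumulative trapezoidal integration of the table
m_p = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))))

print(f"m(2000 psia) ~ {np.interp(2000.0, p, m_p):.3e} psi^2/cp")
print(f"m(5000 psia) ~ {m_p[-1]:.3e} psi^2/cp")
```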
Gas diffusion equation: $\dfrac{\partial m(p)}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\nabla^2 m(p)$
The principle of traditional real gas analysis is to replace the pressure by the pseudopressure
and interpret the data as if the fluid was slightly compressible.
However, there is an important shortcoming in this method. Although the equation looks the
same, it will only be linear as long as we can consider the diffusion term k/φμct constant. This
is valid as long as the average reservoir pressure does not substantially decrease, which is a
reasonable assumption for a standard, relatively short, well test. In the case of extended tests
such as limit tests and production well tests, this may not be the case.
Normalized pseudopressure: $m_{Norm}(p) = p_{ref}\,\dfrac{m(p)}{m(p_{ref})}$

The normalized pseudopressure has the dimension and unit of pressure, follows the same
diffusion equation and coincides with the pressure at pref:

Normalized pseudopressure: $\dfrac{\partial m_{Norm}(p)}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\nabla^2 m_{Norm}(p)$
Darcy’s law expressed in terms of speed, in SI units: $-\dfrac{\partial P}{\partial x} = \dfrac{\mu}{k}\,u$

Forchheimer’s equation, same references and units: $-\dfrac{\partial P}{\partial x} = \dfrac{\mu}{k}\,u + \beta\,\rho\,u^2$

β is called the turbulence factor. There are two main options to address non-Darcy flow:
The first is to focus on the impact of non-Darcy flow on the well productivity. This is what
was done historically using rate dependent skin. Normal diffusion is used, but an additional,
rate related skin component is added.
The other way is to model non-Darcy flow by numerically integrating the Forchheimer
equation in the model.
The application of these two approaches is detailed in the chapter ‘PTA General methodology’
and in ‘Production Analysis – The case of dry gas’.
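A hedged sketch of the first option, the rate-dependent skin S' = S + D·q (the D value and the rates are purely illustrative):

```python
# Rate-dependent (non-Darcy) skin: apparent skin S' = S + D*q.
S_mech = 2.0          # mechanical skin, dimensionless (illustrative)
D = 1.5e-4            # non-Darcy coefficient, (Mscf/d)^-1 (illustrative)
for q in (2000.0, 5000.0, 10000.0, 20000.0):   # gas rate, Mscf/d
    print(f"q={q:8.0f} Mscf/d  apparent skin S'={S_mech + D*q:6.2f}")
```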
3.A Introduction
The following sections present the classical and modern methodology and tools developed for
Pressure Transient Analysis (PTA). We will present the typical path recommended to perform
an analysis and to design a test. The final sections are dedicated to operational considerations
on well testing, i.e. what data to gather and its validation.
Twenty years ago PTA was still named ‘Well Test Interpretation’, as only well test operations
could provide such data. In recent years well tests have been complemented and partially
replaced by formation tests and shut-ins monitored by permanent downhole gauges, hence
the new, more generic name.
Today’s definition of PTA is less about the operation that produces the data than the
processing applied to them. In PTA we gather pressures and rates, preferably downhole, and
we focus on one or several periods of interest, generally shutins (buildup or falloff) to perform
a diagnostic. The diagnostic leads to the choice of a model and this model is then used to
simulate pressure values to be matched on the data. The model is then tried against a larger
portion of the recorded data, if available. Part of the process involves matching the model
parameters on the data by trying to achieve the best possible match, either by trial and error
or using nonlinear regression.
The two major analysis methods described in this book are Pressure Transient Analysis (PTA)
and Production Analysis (PA). Several key items differentiate the processing of PTA from PA:
PTA is preferably on clean, shutin data, while PA uses the production data.
As a consequence PTA data are cleaner, with a high capacity for diagnostics compared to
PA, where one will typically match clouds of simulated and data points.
PTA is performed over a relatively short period, hours / days / weeks rather than the
months and years analysed in PA.
In PTA we analyse pressure data using the rates as the corrective function. In PA it tends
to be the opposite, where the input is the flowing pressure and the output is the rates and
the cumulative production.
In the next two sections we will make a clear distinction between the traditional tools (the ‘old
stuff’) and modern tools (the ‘right stuff’). Traditional tools had their use in the past, and they
are still in use today, but have become largely redundant in performing analysis with today’s
software. Modern tools are at the core of today’s (2010) modern methodology, and they are of
course on the default path of the processing.
Specialized plots correspond to a selected scale where some flow regime of interest such as
infinite acting radial flow, linear flow, bilinear flow, spherical flow or pseudo-steady state
are characterized by a straight line. The slope and the intercept of this straight line will
generally give two parameters of the system.
Type-curve matching consists in sliding a plot of the data, generally on a loglog scale, on
pre-printed type-curves. The relative position between the data and the type-curve, also
called the time match and pressure match, provides two quantitative parameters. The
choice of type-curve will give additional information.
We will start with the semilog plots, the main specialized plots used to quantify the main flow
regime in PTA: Infinite Acting Radial Flow, or IARF.
$$\Delta p = \frac{162.6\,qB\mu}{kh}\left[\log \Delta t + \log\frac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,S\right]$$
In the case of more complex well geometries and reservoir heterogeneities, the constant term
may be more complicated, as it will integrate the cumulative effect of these geometries and
heterogeneities. Still the response will have the same shape. The value of skin S calculated
from the equation above may not be the right value in terms of well damage according to
Darcy’s law, but it will have some meaning. It is called the ‘equivalent skin’.
The Miller-Dyes-Hutchinson (MDH) plot is a graph of the pressure or the pressure change as a
function of the logarithm of time. IARF is characterized by a linearity of the response.
Drawing a straight line through these points gives a slope and an intercept:
IARF straight line: $Y = \dfrac{162.6\,qB\mu}{kh}\,\log \Delta t + b = m\,\log \Delta t + b$

Where: $b = \Delta p_{LINE}\big|_{\log(\Delta t)=0} = \Delta p_{LINE}\big|_{\Delta t = 1\,\mathrm{hr}}$
Important: The value of b corresponds to the value of p on the straight line, not on the data.
Permeability: $k = \dfrac{162.6\,qB\mu}{m\,h}$

Skin factor: $S = 1.151\left[\dfrac{\Delta p_{LINE}(\Delta t = 1\,\mathrm{hr})}{m} - \log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} + 3.228\right]$
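A minimal sketch of the MDH calculation on synthetic IARF data, using the field-unit constants of the equations above (illustrative only, not KAPPA code):

```python
# MDH analysis: fit dp vs log10(dt), then k = 162.6 qB mu/(m h) and the skin.
import numpy as np

q, B, mu, h, phi, ct, rw = 1000.0, 1.0, 1.0, 100.0, 0.10, 1e-5, 0.3
k_true, S_true = 10.0, 2.0

dt = np.logspace(0, 2, 30)                                  # hours, assumed to be in IARF
m_true = 162.6 * q * B * mu / (k_true * h)
dp = m_true * (np.log10(dt) + np.log10(k_true/(phi*mu*ct*rw**2)) - 3.228 + 0.8686*S_true)

m, b = np.polyfit(np.log10(dt), dp, 1)                      # slope, intercept (= dp at 1 hr)
k = 162.6 * q * B * mu / (m * h)
S = 1.151 * (b / m - np.log10(k/(phi*mu*ct*rw**2)) + 3.228)
print(f"slope={m:.1f} psi/cycle  k={k:.2f} mD  skin={S:.2f}")
```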
IARF approximation: $\Delta p_{DD}(X) = \dfrac{162.6\,qB\mu}{kh}\left[\log X + \log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,S\right]$
If t is large enough to reach IARF, so will tp+t. A lot of terms cancel out and we get:
Build-up superposition: $\Delta p_{BU}(\Delta t) = \dfrac{162.6\,qB\mu}{kh}\,\log\dfrac{\Delta t}{t_p+\Delta t} + \Delta p_{DD}(t_p)$

Rearranging: $\Delta p_{BU}(\Delta t) = \dfrac{162.6\,qB\mu}{kh}\,\log\dfrac{t_p\,\Delta t}{t_p+\Delta t} + \Delta p_{DD}(t_p) - \dfrac{162.6\,qB\mu}{kh}\,\log t_p$
If the production time was too short, IARF was not reached during the flow period and ∆pDD(tp)
cannot be turned into a log approximation. The constant term on the right of the equation
becomes unknown. In this case, the analysis described below will give the permeability, but
not the skin factor. If the production time was long enough, then ∆pDD(tp) can also be
turned into a logarithmic approximation and we get:
IARF at tp: $\Delta p_{DD}(t_p) = \dfrac{162.6\,qB\mu}{kh}\,\log t_p + \dfrac{162.6\,qB\mu}{kh}\left[\log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,S\right]$

So: $\Delta p_{BU}(\Delta t) = \dfrac{162.6\,qB\mu}{kh}\left[\log\dfrac{t_p\,\Delta t}{t_p+\Delta t} + \log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,S\right]$
We introduce the Horner Time as: $\dfrac{t_p+\Delta t}{\Delta t}$
Infinite-acting radial flow for a build-up is characterized by linearity between the pressure
response and the logarithm of Horner time. Drawing a straight line through these points gives
a slope and an intercept:

IARF straight line: $Y = \dfrac{162.6\,qB\mu}{kh}\,\log\dfrac{t_p+\Delta t}{\Delta t} + b = m\,\log\dfrac{t_p+\Delta t}{\Delta t} + b$
If the producing time tp was long enough to reach IARF, the IARF approximation for a build-up
will be similar to the drawdown relation, replacing time by Horner time, and will be given by:
IARF for a build-up: $\Delta p_{BU}(\Delta t) = \dfrac{162.6\,qB\mu}{kh}\left[\log\dfrac{t_p\,\Delta t}{t_p+\Delta t} + \log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,S\right]$
After transformation, we get the equation of the Horner plot straight line in terms of pressure:
$$p_{BU} = p_i - \frac{162.6\,qB\mu}{kh}\,\log\frac{t_p+\Delta t}{\Delta t}$$

Permeability-thickness product: $kh = \dfrac{162.6\,qB\mu}{m}$

Skin factor if tp is large enough: $S = 1.151\left[\dfrac{p_{1hr} - p_{wf}(t_p)}{m} + \log\dfrac{t_p+1}{t_p} - \log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} + 3.23\right]$
Note that the time function is such that the data plots ‘backwards’, as when ∆t is small, at the
start of the build-up, the Horner function (log (tp+∆t)/∆t) will be large, and when ∆t tends to
infinite shut-in time the Horner time tends to 1, the log of which is 0.
If the reservoir were truly infinite, the pressure would continue to build-up in infinite-acting
radial flow and eventually intercept the y-axis at pi, the initial pressure. However, as no
reservoir is infinite, the extrapolation of the radial flow line at infinite shut-in time is called p*,
which is simply an extrapolated pressure.
It is important to notice that the calculation of the permeability is valid even in the case of a
short production before the shut-in, while the validity of the calculation of the skin requires a
producing time long enough for IARF to have been reached before tp.
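A minimal sketch of the Horner analysis on synthetic build-up data (illustrative values; the constant 162.6 is the one used above):

```python
# Horner analysis of a build-up: slope -> kh, intercept at Horner time = 1 -> p*.
import numpy as np

q, B, mu, h = 1000.0, 1.0, 1.0, 100.0
k_true, pi, tp = 10.0, 5000.0, 1000.0

dt = np.logspace(0, 2, 40)                               # shut-in time, hours (assumed IARF)
horner = (tp + dt) / dt
m_true = 162.6 * q * B * mu / (k_true * h)
p_bu = pi - m_true * np.log10(horner)                    # synthetic Horner straight line

slope, p_star = np.polyfit(np.log10(horner), p_bu, 1)    # p* = value at log(Horner time) = 0
kh = -162.6 * q * B * mu / slope
print(f"kh = {kh:.0f} mD.ft   p* = {p_star:.1f} psi")
```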
$$\mathrm{Sn}(\Delta t) = \sum_{i=1}^{n-1}\frac{q_i - q_{i-1}}{q_n - q_{n-1}}\,\log\!\left(t_n - t_i + \Delta t\right) + \log \Delta t$$
Calculation of the permeability is the same as for the Horner plot, and the skin factor is
calculated by taking one point on the straight line, say (X,Y), and is given by:
$$S = 1.151\left[\frac{Y - p_{wf}}{m} - X - \log\frac{k}{\phi\,\mu\,c_t\,r_w^2} + \sum_{i=1}^{n-1}\frac{q_i - q_{i-1}}{q_n - q_{n-1}}\,\log\!\left(t_n - t_i\right) + 3.23\right]$$
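A hedged sketch of the superposition time calculation for the last flow period of an illustrative rate history (not KAPPA's implementation; times, rates and the base-10 log convention are assumptions):

```python
# Multirate superposition time for the period starting at t_n.
import numpy as np

t_start = [0.0, 500.0, 800.0, 1000.0]       # start of each flow period, hours
rates   = [1000.0, 1500.0, 800.0, 0.0]      # rate of each period (last period = shut-in)

def superposition_time(dt):
    """Sum_{i=1..n-1} (q_i - q_{i-1})/(q_n - q_{n-1}) * log10(t_n - t_i + dt) + log10(dt)."""
    tn, qn, qn1 = t_start[-1], rates[-1], rates[-2]
    s, q_prev = 0.0, 0.0
    for ti, qi in zip(t_start[:-1], rates[:-1]):
        s += (qi - q_prev) / (qn - qn1) * np.log10(tn - ti + dt)
        q_prev = qi
    return s + np.log10(dt)

print([round(superposition_time(dt), 3) for dt in (1.0, 10.0, 100.0)])
```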
For a multi-rate production or injection (whether rates are measured at surface or downhole),
the adequate semi-log plot will still have the same superposition time as the X axis, but the
pressure will be replaced by:
$$\left[p_i - p(t)\right]\frac{Q}{q(t)}$$
Where Q is a reference value for normalization, that will be generally chosen to be the last
stabilized rate of the interpreted period so that the function tends to ∆p at late time.
For a multi-rate build-up or fall-off using sandface rates, the Y axis will still be
$p(t) - p_{wf}$
but the calculation of the superposition time will allow rates to be changed during the period.
Furthermore the reference rate will not be the difference between the rates of periods N and
N-1 (this has no meaning when dealing with continuous downhole rate measurements), but
the last stabilized rate before shut-in.
For the type-curve used for wellbore storage and skin in an infinite homogeneous reservoir (in
the following figure), the pressure match (relative position in the Y direction) will give the
permeability. The time match (relative position in the X direction) will give the wellbore
storage, and the selection of the curve will give the skin factor.
We have seen in Chapter ‘Theory’ that diffusion problems were solved by replacing the real
variables by dimensionless variables that eliminate the influence of the other parameters, in
order to arrive at a set of equations that are solved, hopefully quite simply and once and for
all, in dimensionless space. Two critical variables are the dimensionless time and
dimensionless pressure.
Dimensionless time: $t_D = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t\,r_w^2}\,t = A\,t$ where $A = f(k, \phi, \mu, r_w, \ldots)$

Dimensionless pressure: $p_D = \dfrac{kh}{141.2\,qB\mu}\,\Delta p = B\,\Delta p$ where $B = g(k, h, q, \mu, \ldots)$
This is still used today in modern software to generate analytical models. The solution is solved
in dimensionless space, then the variables are converted to real variables and superposition is
applied to the solution, and then matched with the real data.
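A small sketch of this change of variables, using the field-unit constants 0.0002637 and 141.2 quoted above (the input values are illustrative):

```python
# Real -> dimensionless variables (the 'A' and 'B' factors of the text).
k, phi, mu, ct, rw, h, q, B = 10.0, 0.10, 1.0, 1e-5, 0.3, 100.0, 1000.0, 1.0

A = 0.0002637 * k / (phi * mu * ct * rw**2)   # tD = A * t
Bfac = k * h / (141.2 * q * B * mu)           # pD = B * dp

t_hr, dp_psi = 10.0, 250.0
tD, pD = A * t_hr, Bfac * dp_psi
print(f"A={A:.3e} 1/hr  B={Bfac:.3e} 1/psi  ->  tD={tD:.3e}  pD={pD:.3f}")
```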
However, this was not possible, or at least not easy, before personal computers and related
software were available. So the ‘trick’ was to use a simple and remarkable property of the
logarithmic scales, historically used to create slide rules. Taking the logarithm of the equations
above we would get:
$$\log t_D = \log t + \log A \qquad\qquad \log p_D = \log \Delta p + \log B$$
so the loglog plot of the real data is simply the dimensionless loglog response translated by a
fixed vector, which is the basis of type-curve matching.
Fig. 3.B.7 – Square root plot Fig. 3.B.8 – Fourth root plot
Fig. 3.B.9 – One over square root of time plot Fig. 3.B.10 – MDH plot for intersecting faults
In order to evaluate the AOF, the well is tested at multiple rates. The bottom hole pressure is
measured during each drawdown and buildup. These data are plotted on the adequate plot in
order to deduce the leading parameter values for the equations governing the rate and the
stabilized flowing pressure relationship. The most classical plot is ∆m(p) vs. q, on a log-log
scale, leading to the C and n parameter values of the Rawlins and Schellhardt equation:
$$Q = C\left[m(\bar{p}) - m(p_{wf})\right]^n$$
The same equation is used to describe the Inflow Performance Relationship and to create the
very useful IPR plot m(pwf) versus q.
The AOF is obtained by extrapolating the deliverability curve to a ∆m(p) value corresponding
to a flowing bottom hole pressure of 14.7 psi.
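A minimal sketch of the C and n determination and of the AOF extrapolation (the test points and the Δm(p) values below are synthetic and purely illustrative):

```python
# Rawlins & Schellhardt: fit log(q) = log(C) + n*log(dm), then extrapolate to the AOF.
import numpy as np

dm  = np.array([0.4e8, 1.1e8, 2.0e8, 3.2e8])    # m(pbar) - m(pwf), psi^2/cp (synthetic)
q   = np.array([2.1e3, 4.8e3, 7.6e3, 10.9e3])   # stabilized gas rate, Mscf/d (synthetic)

n, logC = np.polyfit(np.log10(dm), np.log10(q), 1)
C = 10.0**logC
dm_aof = 3.8e8                                   # m(pbar) - m(14.7 psia), illustrative
print(f"n={n:.2f}  C={C:.3e}  AOF ~ {C*dm_aof**n:.0f} Mscf/d")
```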
Cullender and Katz later improved the method in order to reduce the well test duration by
using short, non-stabilized flow period data. The various well test types and IPR and AOF
methods are detailed in the chapter on Well Performance Analysis.
To speak about ‘right stuff’ vs. ‘old stuff’ is deliberately provocative. Old stuff is not always
wrong. Old stuff was smartly developed to adapt to what we had before modern PC’s: slide
rules, graph paper, basic calculators, programmable calculators, etc. Old stuff has been of
great benefit to the industry and was on the critical path to get the modern tools we have
today.
What is sometimes wrong is to continue using old techniques that are less accurate than more
recent developments. What is definitely wrong is to continue using such tools today, simply out
of inertia, while everybody knows that their limitations are successfully addressed by more
modern techniques. What is near criminal is using the good old stuff to address completely
new problems that were never encountered when this good old stuff was developed.
Until 1983, PTA methodology was a manual process alternating type-curve matching and
specialized analyses. Type-curves without derivative had poor diagnostic capabilities. The
results from specialized plots would help position the data on the type-curve.
For example, drawing the IARF straight line on the Horner plot would give a permeability that
could be used to set the pressure match on the type-curve. Selecting a type-curve would give
the approximate time of start of IARF, which in turn would help define the Horner plot line. For
gas we would complement this with AOF / IPR analyses.
Type-curves with pressure only had very poor resolution on a loglog scale.
Type-curves were generally printed for drawdown responses, and ruined by superposition.
Type-curves were set for a discrete number of parameter values.
Specialized plots required a pure flow regime that may never have been established.
Skin calculation on the Horner plot required that IARF was reached during the drawdown.
Drawing a straight line through the two last pressure points was a common practice.
The process required moving back and forth between at least two or more different plots.
To most engineers the replacement of manual techniques by computer based analysis in their
day-to-day work occurred in the 1980s, and came from three major breakthroughs:
Electronic downhole pressure gauges, either run with memory or on electric line, became
cheap and reliable, detecting repeatable behaviors far beyond what the previous generation
of mechanical gauges could offer.
The spread of Personal Computers allowed the development of PC-based pressure transient
analysis software. The first PC based programs appeared in the 1980’s, initially reproducing
the manual methods on a computer. Since then, new generations of tools have been
developed, with modern methodology at its core.
The Bourdet derivative is certainly the single most important breakthrough in the history of
Pressure Transient Analysis. It is still today (2010) the cornerstone of modern technology.
As any breakthrough idea, the principle of the Bourdet derivative is very simple:
The Bourdet Derivative is the slope of the semilog plot displayed on the loglog plot…
… to be more accurate, it is the slope of this semilog plot when the time scale is the natural
log. It has to be multiplied by ln(10) ≈ 2.303 when the decimal logarithm is used in the semilog
plot. The semilog plot is not ‘any’ semilog plot (MDH, Horner, etc). To be correct the reference
logarithmic time scale must be the superposition time.
For the first drawdown: $\Delta p' = \dfrac{d\Delta p}{d\ln \Delta t} = \Delta t\,\dfrac{d\Delta p}{d\Delta t}$

In the more general multirate case, and in particular for shut-ins: $\Delta p' = \dfrac{d\Delta p}{d\,\mathrm{sup}(\Delta t)}$
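A minimal sketch of the numerical calculation (a classical log-spaced, weighted central difference with a smoothing window L; the implementation details and the L value are illustrative, not KAPPA's algorithm):

```python
# Bourdet derivative: slope of dp versus ln(t) (or superposition time), computed with
# left and right points at least L log-units away, combined as a weighted central difference.
import numpy as np

def bourdet_derivative(x, dp, L=0.1):
    """x = ln(t) or superposition time; returns dp' = d(dp)/dx at each point."""
    deriv = np.full_like(dp, np.nan)
    for i in range(1, len(x) - 1):
        j = i - 1
        while j > 0 and x[i] - x[j] < L:            # left point at least L away
            j -= 1
        m = i + 1
        while m < len(x) - 1 and x[m] - x[i] < L:   # right point at least L away
            m += 1
        dx1, dx2 = x[i] - x[j], x[m] - x[i]
        s1, s2 = (dp[i] - dp[j]) / dx1, (dp[m] - dp[i]) / dx2
        deriv[i] = (s1 * dx2 + s2 * dx1) / (dx1 + dx2)   # weighted central slope
    return deriv

t = np.logspace(-2, 2, 200)
dp = 50.0 * (np.log(t) + 8.0)            # pure IARF: the derivative should stabilize at ~50
print(np.nanmedian(bourdet_derivative(np.log(t), dp)))
```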
Combined with the early time unit slope during wellbore storage, the derivative provides an
immediate way to define the pressure and the time match on the loglog plot, just by
positioning a unit slope line on the wellbore storage regime and positioning the horizontal line
on the IARF response.
This alone would have made the Bourdet derivative a key diagnostic tool. The delightful
surprise was that the derivative could do much more, and that most well, reservoir and
boundary models carry a specific signature on the derivative response. It is this remarkable
combination that allowed the derivative to become THE diagnostic and matching tool in
Pressure Transient Analysis.
Pure wellbore storage effects are only observed at very early time when the well pressure
behavior is dominated by the well fluid decompression or compression.
During pure wellbore storage the pressure change is proportional to the elapsed time
($\Delta p = \dfrac{qB}{24C}\,\Delta t$ in field units); the derivative is therefore: $\Delta p' = \Delta t\,\dfrac{d\Delta p}{d\Delta t} = \Delta p$
At early time, when pure wellbore storage is present, pressure and the Bourdet derivative
curves will merge on a unit slope straight line on the loglog plot.
Other early time flow regimes, such as linear and bilinear flow, covered in more detail later,
will exhibit a different and specific behavior for both pressure and the Bourdet derivative.
The simplest and most frequently used analytical model in Pressure Transient Analysis is the
case of a vertical well, with wellbore storage and skin, producing a homogeneous reservoir of
infinite extent. This ‘new’ formulation of the derivative by Bourdet et al. solved this case at
once, on a single loglog plot, and in a very accurate way:
When plotting the pressure and the Bourdet derivative on a loglog scale, at ‘late time’ the
derivative would stabilize, and the stabilization level would define the type-curve pressure
match (hence the permeability) in a unique way. The only possible movement then would be
left and right to define the time match.
At early time the Pressure and the Bourdet derivative would merge on a single unit slope, that
was also found on the type-curves, hence providing a unique value of this time match, and an
instant calculation of the wellbore storage.
Luckily enough, the shape of the derivative (drawdown) type-curve and the Bourdet derivative
of the data (multirate) was seldom affected by the superposition, unlike the pressure data, so
it was reasonably valid to match the data derivative with the type-curve derivative, hence
getting a unique identifier of the type-curve (generally CDe2S), which in turn would give the
value of Skin.
So, in a single action, a type-curve using the Bourdet derivative would provide the definitive
answer on a single, accurate diagnostic plot.
This was already brilliant, but it turned out that the Bourdet derivative could bring much more
for all types of models, whether by identification of other flow regimes or by the signature that
the Bourdet derivative would carry for a given model…
We are not going to describe exhaustively the list of flow regimes that can be successfully
identified using the Bourdet derivative. The short answer is: ‘a lot’. The table below shows a
list of the most frequently used flow regimes in PTA, together with the chapter of the DDA
book where this will be covered in more details:
The use of the Bourdet derivative does not stop with flow regimes. The Bourdet derivative can
display the signature of numerous well, reservoir and boundary behaviors. Again the loglog
plot below shows some typical behaviors detected from the observation of the Bourdet
derivative.
3.C.8 Modeling
The actual measured data is matched to a full model response selected by the interpreter from
the identified flow regimes, including the appropriate well and boundary models and the actual
production history.
Despite the good hard work done on the subject, true deconvolution is fundamentally unstable.
To add stability into the deconvolution process one has to make assumptions that bias the
response. Until recently, when a new paper was published on the subject, the natural question
would not be “does it work?” but rather “where’s the trick?”.
This is no longer the case. A set of publications, initiated by Imperial College and
complemented by bp (e.g. SPE #77688 and SPE #84290), offer a method that actually works,
especially, although not exclusively, to identify boundaries from a series of consecutive build-
ups. There is a trick, and there are caveats, but the trick is acceptable and the method can be
a useful complement to PTA methodology.
Saphir offered in October 2006 the first commercially available version of this method. It has
since been improved by adding additional methods and it is now a totally recognized and
helpful method, able to provide additional information, unseen through other techniques.
In the simulated example shown on the plot below, the total production history is 3,200 hours,
and the build-up is only 100 hours long. What we see on the loglog plot corresponds to only
100 hours of diffusion. We use the superposition of the rate history in the derivative
calculation, but this is only to correct superposition effects; we really only see 100 hours of
diffusion.
But there is more than 100 hours of information in the total response. Pressure has been
diffusing for 3,200 hours, some locations far beyond the radius of 100 hours of investigation
have felt the effect of the production and this information has bounced back to the well. So
there is more information somewhere in the data, but not in the build-up alone.
The idea behind deconvolution is, in this example, to recreate from the real data, the ideal
pressure response to a constant production over the same period (see below). If such a
response was created, we could extract a loglog response of 3,200 hours duration, and the
virtual data extracted this way would show much more of the reservoir than the 100 hour
build-up could possibly reveal alone.
In other words, if we extract the deconvolved data from the real pressure response without
assumptions, we will be able to get a much longer response, or the same response for a much
shorter test.
Fig. 3.D.3 – Deconvolution: pu(t) Fig. 3.D.4 – Deconvolution: pu(t) & p’u(t)
In theory we may not even need to shut the well in. With deconvolution we would make a
stepwise construction of an ideal constant pressure response and be able to perform a
Pressure Transient Analysis whenever we have some rates and pressures recorded at the same
time. One of the Holy Grails of transient analysis has been found?
$$\left(f \otimes g\right)(t) = \int_0^t f(\tau)\,g(t-\tau)\,d\tau$$

$$\Delta p_w(t) = p_i - p_w(t) = \int_0^t q(\tau)\,\frac{\partial p_u(t-\tau)}{\partial (t-\tau)}\,d\tau = \int_0^t q(\tau)\,p'_u(t-\tau)\,d\tau$$

Where $p'_u(t) = \dfrac{\partial p_u}{\partial t}$

In other words: $\Delta p = q \otimes p'_u = q' \otimes p_u$

Note that, when the rate history is a step function, the calculation of this integral will bring the
‘usual’ superposition equation, seen in Chapter 2:

$$p(t) = p_i - \sum_{i=1}^{n}\left(q_i - q_{i-1}\right)\,p_u\!\left(t - t_i\right)$$
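A hedged numerical check of this relation (the forward problem that deconvolution inverts), using an assumed, purely illustrative unit-rate response pu(t):

```python
# Forward convolution check: p(t) = pi - integral_0^t q(tau) p_u'(t-tau) dtau
# should equal the step superposition  p(t) = pi - sum (q_i - q_{i-1}) p_u(t - t_i).
import numpy as np

def pu(t):                                    # assumed unit-rate response (illustrative)
    return np.where(t > 0, 0.05 * np.log1p(t / 0.01), 0.0)

def dpu(t):                                   # its time derivative p_u'(t)
    return np.where(t > 0, 0.05 / (0.01 + t), 0.0)

t_eval, pi = 1500.0, 5000.0
tau = np.linspace(0.0, t_eval, 300001)
q = np.where(tau < 1000.0, 1000.0, 0.0)       # 1,000 stb/d, shut-in at 1,000 hr

p_conv = pi - np.trapz(q * dpu(t_eval - tau), tau)
p_step = pi - (1000.0 * pu(t_eval) + (0.0 - 1000.0) * pu(t_eval - 1000.0))
print(round(p_conv, 2), round(p_step, 2))     # the two values should be close
```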
What we know from the test data is the pressure response p(t) and the rate history q(t).
What we are interested in is pu(t), the ideal response for a constant production. The process
of getting one convolution component (pu) from the other convolution component (q) and the
convolution product (p) is called deconvolution.
We will not get into the details of the past work on deconvolution. There was what we call
‘blind’ deconvolution, i.e. a stepwise ‘de-superposition’ of the pressure data assuming nothing.
This is an unstable and divergent process. There was also ‘constrained deconvolution’, such as
moving to Laplace space and dividing the Laplace pressure response by the Laplace rate
history. The weak point was generally in the assumptions made. Other deconvolution targets
were more limited, such as getting an early IARF response by deconvolving the early time
build-up with the ‘assumed’ downhole rates corrected by wellbore storage. However all these
attempts had limited success. Deconvolution was considered a nice but unrealistic idea.
However, a new approach has recently been developed by Imperial College and bp. This
approach has generated much enthusiasm, and although it is valid and useful, it is most
important to bear in mind the assumptions and the limitations, as there are, indeed, some. It
is based on a change of variable corresponding to what we really look for, i.e. the derivative on
a loglog scale.
We define $\sigma = \ln(t)$ and $z(\sigma) = \ln\dfrac{dp_u(t)}{d\ln(t)} = \ln\dfrac{dp_u(\sigma)}{d\sigma}$

With this change of variable, the convolution equation becomes:

$$p(t) = p_i - \int_{-\infty}^{\ln t} q\!\left(t - e^{\sigma}\right)\,e^{z(\sigma)}\,d\sigma$$
$\sigma = \ln(t)$ and $z(\sigma) = \ln\dfrac{dp_u(t)}{d\ln(t)} = \ln\dfrac{dp_u(\sigma)}{d\sigma}$
The principle of the new deconvolution method is to find the derivative curve z() which, using
a modified convolution expression, will match the data. The curve z() is defined as a polyline
or a spline. It has an initial point, as (0,0) makes no sense on a loglog scale. Its time range is
the elapsed time between the beginning of the production history and the last data point we
will try to match (3,200 hr in the previous example).
The curve z() is the main unknown. There are two additional sets of optional unknowns: the
first is the initial pressure pi, which may or may not be known. The last unknown is a tolerance
to errors in the rate history, which we need to introduce for the optimization process to
converge.
Naturally, the first and main component of the objective function we want to minimize is the
standard deviation between the convolved model and the pressure data (Fig. 3.D.8 below).
This may be all pressure data or, more likely, a series of time intervals where the pressure
data is considered reliable. Typically, successive build-ups are a good candidate, unless the
producing pressures are exceptionally smooth as in the case of clean gas tests for example.
The second component of the objective function is the total curvature of the derivative
response. When several derivative responses provide an equivalent match, the idea is ‘the
simpler the better’. Among the candidate responses, the one with the smallest total curvature
is selected. In other words, if several variations give a good fit, the one that transits smoothly
between the points is preferred to those that oscillate.
The third and last component of objective function is the modification in the rate values
required to obtain a match. Again, for the same quality of match, the solution that requires the
smallest changes in the rates is selected.
Fig. 3.D.9 – 2nd objective function Fig. 3.D.10 – 3rd objective function
minimize total curvature minimize rate correction
Because we need to account for some uncertainties on the rates and for better behavior of the
optimization algorithm, we allow the rates to change a bit, but we try to make this change as
small as possible.
We want the resulting derivative curve to have the same kind of signature as the various
analytical and numerical models we use. To achieve this we make sure that the total curvature
of the response is reasonable, i.e. the resulting curve is not too twisted.
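A schematic sketch of such a composite objective function (this is not von Schroeter's actual formulation; the weights, the parameterization and the placeholder reconvolution operator are all illustrative):

```python
# Composite deconvolution objective: data misfit + curvature penalty + rate-change penalty.
import numpy as np

def objective(z_nodes, dq, p_obs, convolve, w_curv=1.0, w_rate=1.0):
    """z_nodes: ln-derivative values on uniformly spaced ln(t) nodes; dq: rate corrections."""
    p_model = convolve(z_nodes, dq)                 # reconvolved pressure (user-supplied operator)
    misfit = np.sum((p_model - p_obs) ** 2)         # 1) match the selected pressure data
    curvature = np.sum(np.diff(z_nodes, 2) ** 2)    # 2) prefer the smoothest derivative curve
    rate_change = np.sum(dq ** 2)                   # 3) prefer the smallest rate corrections
    return misfit + w_curv * curvature + w_rate * rate_change

dummy = lambda z, dq: np.full(50, 4900.0)           # placeholder reconvolution operator
print(objective(np.zeros(30), np.zeros(5), np.full(50, 4900.0), dummy))
```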
Once we get our deconvolved derivative, we integrate to get the pressure response and show
both pressure and derivative on a loglog plot. As this is the theoretical response for a constant
rate, we match the deconvolved data with drawdown models, not superposed models.
Because this deconvolved response is not data, but ‘only’ the result of an optimization that
may be imperfect, we keep an eye on the real data by superposing the model and looking at
the history match on the real, not the deconvolved signal.
The green and red build-ups are telling exactly ‘the same story’, while the blue build-up
is consistent at late time but diverges at early time, with apparently a different wellbore
storage and skin.
Fig. 3.D.11 – Production and pressure history Fig. 3.D.12 – Normalized loglog plot
If we apply the von Schroeter et al. method on the two coherent build-ups (green and red) the
process is successful and the resulting deconvolution is shown in the figure below. The late
derivative dip in both build-ups does not correspond to a pressure support but the beginning of
a closed system. The late time unit slope in the deconvolution derivative will provide a fair
estimate of the well drainage area. Because we had two build-ups the deconvolution process
could also back calculate a consistent value for the initial pressure.
If we calculate the deconvolution with no constraint from the three build-ups, or even using
either of the first two build-ups together with the last, we get an erratic deconvolved signal
(see figure below).
Coming back to Earth, we should remember that deconvolution is, after all, only a nonlinear
regression process.
Any engineer that has been using nonlinear regression on analytical models knows how this
process is impressive when it succeeds and pathetic when it fails. It is the same here. The
inconsistencies at early time have forced the spline to oscillate to the best, actually least bad,
fit for the early time of the three build-ups. This had a residual effect at late time and the
spline had to oscillate again to get back on track. It is frustrating, because we do not really
care about the early time, and the late time of the three build-ups were telling more or less the
same thing. But the process has failed, because it is just a brainless numerical process that is
not able to incorporate this early / late time distinction. Hence, the process is doomed, looking
for a global solution that does not exist.
If we want to use deconvolution for, say, volumetrics from PDG data, we are likely to see the
wellbore storage and skin change over time. We face here a major potential limitation of the
deconvolution. It also kills any hope to have the deconvolution become a single button option
that would give the engineer a reliable late time response at no risk.
To be fair to the von Schroeter et al. method, it could give a more stable answer if we
increased the smoothing of the algorithm. This amounts to giving more weight to the ‘minimum
curvature’ part of the error function at the expense of the ‘match data’ part. The algorithm
then sacrifices the accuracy of the match in order to get a simpler, smoother derivative curve.
The danger of using smoothing is to hide a failure of the optimization and get a good looking
but absolutely irrelevant deconvolution response. Such irrelevance could be detected when
comparing the reconvolved pressure response to the recorded data on a history plot. However
this problem is one of the main, frustrating shortcomings of the original von Schroeter et al.
method.
The workaround suggested by Levitan is to replace a single deconvolution for all build-ups by
one deconvolution for each build-up.
As deconvolution can be done using pi and any build-up, the idea of the Levitan method is to
perform one deconvolution for each build-up with a common value of initial pressure. The first
guess of pi may produce divergent deconvolution responses. The value of pi is iteratively
changed until one gets a consistent late time response for all deconvolutions. Because each
deconvolution only honors one build-up at a time, there will not be any instability at early
time.
This process is easily accessed in Saphir: the Levitan et al. deconvolution method is proposed
when multiple periods are extracted: ‘Separate deconvolutions with a common pi’ (Levitan et
al). The checkbox ‘force Pi to:’ is automatically tagged ‘on’ and pi must be entered manually.
This option automatically calculates one deconvolution per extracted period. The result of
working on the 3 build-ups is shown below:
The process would be a bit more complicated if we did not have a couple of coherent build-ups
to start with. If we had, say, only Build-up #1 and Build-up #3, we would have to 'play' with
the initial pressure until the late time behaviors are coherent and give the same reservoir size.
Attempts at deconvolution with values of pi that are too low and too high are shown in the
plots below.
This becomes a trial-and-error process, continued until the late time behavior is coherent for the
two build-ups, although this may just not be possible. We see in the plots below that the late
time behaviors are reasonably coherent but cross each other.
The Levitan method addresses the main limitation of the von Schroeter et al. method. However
two problems remain:
a) It requires an iterative process where the initial pressure is assessed and corrected
until the coherence of all deconvolutions is reached. Attempts to add an optimization loop
on pi have failed so far.
b) A deconvolution using a given build-up and a single additional point (the initial
pressure) will not provide the additional, intermediate behaviors that we might detect if we
were running a single optimization on all build-ups. Some information is therefore
potentially lost in the separation process.
When the inconsistency comes from different behaviors at early time, the following solution is proposed.
To illustrate this we can run a very simple simulation, or test design, of two systems with
different wellbore storages and skins but the same reservoir / boundary model. In this
example we use the simplest homogeneous infinite behavior, but the idea applies to more complex
systems. The loglog plot of both responses is shown below left. Simulation 1 has a constant
wellbore storage. Simulation 2 has a larger, changing wellbore storage and a higher skin value.
The plot below right compares both simulations on a linear scale and displays the difference
between them. During the production phase, the simulations differ by an early time transient
corresponding to the different wellbore effects and skins. When the wellbore effects fade, the
difference stabilizes at a level corresponding to the additional pressure drop due to the skin
difference, ΔpSkin(ΔSkin). When the well is shut in there is again a transient behavior, but when
the wellbore effect fades the difference stabilizes to zero. We will call this limit time the
convergence time.
Fig. 3.D.19 – Two simulations: loglog plot
Fig. 3.D.20 – Two simulations: linear plot
On the loglog plot, pressures do not merge at late time because we are plotting
Δp(Δt) = pShut-in(Δt) – pwf(Δt=0). The pShut-in values converge, and Δp stabilizes at pwf1 – pwf2.
The deconvolution is an optimization on p, not on Δp. We will therefore get a stable process, not
affected by early time discrepancies, if we run an optimization on one build-up and the last part
of the other build-ups, after convergence.
The Houzé et al. method (called 'Deconvolution on one reference period and the end of the
other periods' in Saphir) allows specifying which period (blue in our case) will be taken as
reference, with all its data taken into account, while the other periods (green and red) will only
be taken into account at late time, after the convergence time specified by the user.
It gives a single deconvolution matching the reference period at early time and
corresponding to all the periods at late time, as shown below:
From experience, this combination does not bring much. The problem with the Levitan method
is that each deconvolution is the simple combination of a given build-up and a single value
of initial pressure. Even if we add one or two log cycles between the build-up and the
deconvolution, all this addition comes from a single pressure value, and therefore the
deconvolution has no chance to pick up intermediate behaviors.
Compared to the Levitan method, we get the same number of deconvolution responses and
the post-processing is the same. However it has the advantage of using a 'proven'
convergence time instead of a wildly guessed initial pressure, and there is no reiteration. In
addition, each deconvolution uses positive information from all build-ups, and it is possible
to pick up intermediate behaviors that are beyond the reach of the Levitan method.
There is no guarantee that the late time behavior of all deconvolution curves will be the same. This
will be the case only if the material balance information is consistent. If it is not, this will point
out that the deconvolution process may not be valid, either because of poor data or because the
hypotheses behind this deconvolution are not valid.
When the deconvolution is run with too high a value of pi (red dot), there is additional
depletion compared to the infinite case. In order for the deconvolution to honor both the build-
up and the initial pressure, it has to exhibit at late time an increase of the derivative
level, typical of a sealing boundary. Conversely, if we enter too low a value of pi (blue dot),
there needs to be some kind of support to produce a depletion smaller than the infinite case. The
deconvolution will exhibit a dip in the late time derivative.
In other words, the deconvolution honors the data of the first 100 hours of the build-up, and then
uses the flexibility of the last 400 hours to 'bend' in order to be compatible with the entered
value of pi. Why is the shape so smooth? Because, on top of this, the optimization process
minimizes the total curvature of the derivative curve.
Fig. 3.D.23 – Deconvolution with pi too high (red) or too low (blue)
We had 400 hours of production and 100 hours of shut-in. Deconvolution is attempting to
provide 500 (theoretical) hours of constant production. The first 100 hours of the spline
representation are rigid, because they have to match the build-up. We have a loose tail
between 100 and 500 hours. The regression moves the tail up or down in order to match the
initial pressure pi after superposition, i.e. to match the depletion during the producing phase.
If the initial pressure is higher than the infinite case, the depletion is greater, and the reservoir
has to be bounded. If the initial pressure is lower, the depletion is less and the derivative of
the deconvolution will tail down and exhibit pressure support.
Now, there are hundreds of tail responses that could produce a specific depletion. Which one is
picked? The simplest one, because the regression selects the solution with the lowest
curvature, i.e. the one that goes smoothly from the fixed part of the response to
whatever PSS level is required. The first part of the data is rigid, the last part is more or less set
by the depletion, and the transition is as smooth as possible.
Naturally, if we had more intermediate data, like some reliable flowing pressures, this would
‘firm up’ the deconvolution algorithm. The optimization first focuses on matching the data
before trying to get a smooth response.
The periods to analyze must first be selected for extraction; they may be picked interactively.
Having selected more than one build-up, when the deconvolution process is called its dialog
offers the various available methods:
- Looking for a common solution to all the extracted periods (von Schroeter et al.)
- Separate solutions with a common pi (Levitan et al.)
- Deconvolution on one period as reference, and the end of the other periods (Houzé et al.)
- Variant 1: Levitan et al. after Houzé et al.
It is possible to superimpose the deconvolved signal and the individual build-up data in a rate-
normalized way. The reconvolved responses corresponding to the build-ups can also be
plotted.
This is a very serious issue, and may be one of the main dangers of the deconvolution process.
When we interpret data and choose the simplest model that honors whatever we have, we
know the assumptions we make. When we add a closed system to a model matching individual
build-ups in order to reproduce the depletion, we know that we are taking the simplest model,
but that there may be additional and intermediate boundaries that we do not see. In its
apparent magic, deconvolution presents a long term response that is NOT positive information
but just a best match. So how can we discriminate positive information from information by
default?
One element of answer is to calculate a sensitivity. The deconvolution parameters include the
points of the z(σ) response, the curvature of this response and the rate changes. Looking at
the Jacobian of the deconvolution matrix we can see the sensitivity of the match to the
individual points of the z(σ) response. In Saphir we normalize this and show it as a vertical
sensitivity band for each point of z(σ).
This band does not quantify the real, physical uncertainty. It only provides a blunt statistical
description of the problem around the solution point and answers the question, "By how much
can I move this node before the match is affected?" If the sensitivity is high, we are close to
positive information and the uncertainty band is narrow. If, on the contrary, moving a point up
and down has little or no effect on the global response, then we know that this section of the
response may not be very relevant. In order to illustrate this we have taken two extreme
examples.
In the above figure we have done what we should never do: calculate a deconvolution with a
single build-up WITHOUT imposing a value of pi. The problem is completely under-defined,
because we can move the tail end of the deconvolution and get a perfect match of the data by
compensating with pi. So the tail end is completely irrelevant, and this shows on the sensitivity plot.
In the below left figure, we have run the deconvolution with the two build-ups of the original
example. We let pi be calculated by the deconvolution process. As one can see, the sensitivity
is much better, but the intermediate behavior could have been different: what made the
deconvolution pick this particular curve was its smoothness. In the right-hand figure the
same deconvolution was run but the value of pi was fixed. This makes the sensitivity graph
even better.
This new deconvolution provides, in one run, a synthesis of information present in different
parts of the pressure / rate history. When you combine two build-ups that are far apart in
time, you do not only check their coherence but you integrate, in the deconvolved response,
the information of depletion that took place between these two build-ups.
There is nothing really that a conscientious and trained engineer could not do before having
this tool, but all this information is calculated in a single run, early in the interpretation
process, instead of being a trial-and-error, last minute analysis following the interpretation of
the individual build-ups.
So the tool is useful and interesting, but we must keep its uses and limitations in mind.
Still, it is great to have something new in an area where the last really innovative theoretical
tool was the 1983 Bourdet derivative.
Once the interpretation is initialized, the first task is to get a coherent and representative set of
rate and pressure data. This includes loading the data, quality check and validation, and editing
to prepare for analysis. One or several periods of interest, typically build-ups, will then be
extracted, the diagnostic plot created and the data diagnosed. The interpretation engineer
can select one or several candidate analytical and/or numerical models, set their parameters
and generate these models. For the candidate models that are retained, the engineer can refine
the parameters, either manually or using nonlinear regression. Once the model parameters are
finalized, the user may assess the sensitivity and/or cross-correlations of the parameters using
confidence intervals from the nonlinear regression, and run sensitivity analyses. Finally, a report
is issued.
The path above is the default path when all goes well. In reality, for complex problems, it may
be a trial-and-error process where the interpretation engineer may decide to go backwards
before continuing forward again when a segment of the process is not satisfactory.
3.E.1 Initialization
The interpretation engineer must first input information required to identify the test and select
the main options that will set up the interpretation process: the type of fluid (that will
determine the function of the pressure to be used) and the type of test (standard,
interference). The engineer may start with a standard analysis, nonlinear numerical, shale gas
or coalbed methane (CBM), multilayer analytical or linear numerical, or a formation tester type
of analysis. The final input will be the parameters that are assumed to be known which are
required to calculate the interpretation results: porosity, net drained vertical thickness, well
radius, etc.
For a slightly compressible fluid, only a few PVT properties, assumed to be constant, are
needed: formation volume factor, viscosity, and total system compressibility.
For other phase combinations, a choice of correlations or input of PVT tables is required to
calculate the pseudo pressure and pseudo time functions.
Fig. 3.E.4 – Loading Data: Define data source
Fig. 3.E.5 – Loading Data: Define data format
Unlike for open-hole logs, and despite several attempts in the past, as of today (2010) no
industry-standard ASCII format has emerged that would allow the load to be executed with a
single click. In Canada the EUB has published a format (PAS) for the compulsory electronic
submission of test data and results, but this remains very oriented towards local legal
procedures. So software packages have to provide the engineer with the flexibility to
interactively define the file format, in order to cover the wide variety of existing files.
Originally, the amount of data was a real issue because of the limited memory available under
DOS, the cost of computer memory, and the fact that the size of the data arrays had to
be declared and set by the software programmers at compilation time. All these limitations
have gone. Today's software can easily handle multiple gauges and the volume of gauge data
acquired during a well test, which is sometimes more than a million data points.
The recommended procedure is to load all data acquired from all gauges during the well test,
and not just a filtered subset. Filtering can always be applied later, on duplicate data sets.
However, things are changing with the spread of permanent downhole gauges and the
increased usage of production data in Pressure Transient and Production Analysis. The amount
of production data is one order of magnitude higher, and the data are much less smooth than a
typical build-up. Smart filters, such as wavelets, are increasingly required to reduce the volume
of the data, retaining trends and significant changes while at the same time eliminating noise.
The processing of Permanent Downhole Gauge (PDG) data is covered in another chapter.
Validation of the gauges: identification of failures, drift, clock failure, resolution, etc.
When two pressure sensors are set at the same depth, as with a dual gauge carrier, their
difference can be used to check their synchronization (time shift) and their coherence. Gauge
failure and gauge drift may be identified.
When the gauges are set at different levels, as in a tandem configuration, any change of the
pressure gradient occurring between the gauges may be detected. In the presence of phase
segregation problems, the proper placement of dual gauges may help qualify and even
quantify these problems. The engineer will then avoid pointlessly interpreting, and using, a
complex reservoir model to explain a behavior that has absolutely nothing to do with the reservoir.
In the absence of dual gauges, one can calculate the derivative of the gauge pressure versus time
and plot it on a linear or log-log scale. This acts as a 'magnifying glass' on the pressure behavior.
Fig. 3.E.6 – Data Quality control using dual gauges and Drifting gauge
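As a rough sketch of this 'magnifying glass' idea (a minimal example on a synthetic, hypothetical signal; it does not describe any specific software feature), one can numerically differentiate the gauge pressure with respect to time and plot the result:

import numpy as np

def pressure_time_derivative(t_hr, p_psi):
    """Numerical derivative dp/dt of a gauge signal (central differences).
    Small operational events barely visible on p(t) often stand out on dp/dt."""
    t = np.asarray(t_hr, dtype=float)
    p = np.asarray(p_psi, dtype=float)
    return np.gradient(p, t)                          # psi/hr, same length as input

# Illustrative use on a synthetic, slightly noisy build-up-like signal:
t = np.linspace(0.0, 100.0, 5001)                     # hours
p = 3000.0 + 200.0 * np.log1p(t) + np.random.normal(0.0, 0.05, t.size)
dpdt = pressure_time_derivative(t, p)
# dpdt can then be plotted on a linear or log-log scale as a QC aid.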
Even though people associate the difficulty of well test interpretation with the modelling part, a
lot of thinking takes place at this stage of the interpretation, as it defines the starting point
from which the diagnostic will be made. Failing to identify operational problems can
potentially jeopardize the whole interpretation process.
There is a side advantage in performing a comprehensive quality control: after going back and
forth between the data and the well test report, the interpretation engineer will know what
happened during the test, even if he/she was not on site.
Beyond the usual cleaning of irrelevant data and the correction of load errors, the main
challenge will be to end up with at least one coherent, synchronized set of rate and pressure
data. To get there the engineer may have to perform the following tasks (a rate conversion
sketch is given after this list):
- If not already loaded, create the rate history graphically by identifying the pressure breaks
and get the rate values from the well test report. Use a facility to automatically identify the
shut-in periods and automatically convert the production history from volumes to rates.
- Refine the production history when the time sampling of rate measurements is too crude.
- Conversely, if the production history goes into useless detail, simplify the rate history to
reduce the CPU time required to run the models.
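A minimal sketch of the volumes-to-rates conversion mentioned above, assuming a simple table of period start/end times and produced volumes (all names and values are hypothetical, not Saphir functionality):

def volumes_to_rates(periods):
    """Convert produced volumes per period into average rates.
    periods: list of (start_hr, end_hr, volume_stb); returns (start, end, rate_stb_per_day).
    A zero volume over a period is treated as a shut-in (zero rate)."""
    rates = []
    for start, end, volume in periods:
        duration_days = (end - start) / 24.0
        rate = volume / duration_days if duration_days > 0 else 0.0
        rates.append((start, end, rate))
    return rates

# Illustrative use:
history = [(0.0, 24.0, 1000.0),   # 1000 stb produced in the first 24 hours
           (24.0, 124.0, 0.0)]    # shut-in
print(volumes_to_rates(history))  # [(0.0, 24.0, 1000.0), (24.0, 124.0, 0.0)]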
During the extraction process, and possibly later, the engineer may decide to control the
derivative smoothing, apply a logarithmic filter, and in the case of a shut-in, control the last
flowing pressure.
3.E.6 Deconvolution
The new deconvolution method developed in the last few years was presented in a previous
section. Extraction of the deconvolution may occur right after the extraction of the individual
shut-ins, or much later after the interpretation of the individual shut-ins.
3.E.7 Diagnostic
After extraction, data problems overlooked in the initial quality control may become apparent,
requiring further data editing, and a new extraction.
Looking at the derivative response will generally be the starting point of this process.
Individual features in the derivative signature will be considered, validated or rejected, and
potentially associated with a well, reservoir or boundary model. These possible assumptions must
then be compared to what is already known from other sources.
Depending on the diagnostic, the loglog and semilog plots can be complemented by other
specialized plots to identify specific flow regimes by straight-line analysis, although this
approach has been made largely redundant by the introduction of the modern approach. The
engineer has the choice of the pressure function, the time function and the type of
superposition applied to the time function: raw function, tandem superposition for
simple build-ups, or multirate superposition.
Depending on the prior knowledge and the complexity of the response, the problem may be
very quickly restricted to one or two alternatives, or the range of possibilities may remain
large. For exploration wells, the uncertainty in the explanation may stand, and alternative
explanations may be presented in the ‘final’ report. Further tests and increased knowledge of
the reservoir could allow, later, narrowing down the range of possibilities, months or years
after the initial interpretation.
The objective is to use the modelling capability of the software to match the pressure response,
in part or in totality. This consists of selecting one or several models, which may be
analytical or numerical, entering a first estimate of the model parameters, then running the
model and comparing the simulated results with the real data on all relevant plots.
AI-based model advisers may be used to speed up the process by detecting whether a derivative
response can be explained by a certain combination of well, reservoir and boundary models,
and by producing a first estimate of the model parameters with no user interaction.
Today’s software products offer a wide choice of analytical models. Typically the user will
select a wellbore, a well, a reservoir and a boundary model. Unfortunately, our ability to solve
problems mathematically is limited, and all combinations of well, reservoir and boundary
models may not be available. This is sometimes frustrating to the engineer, as in this case only
portions of the response can be matched at any one time.
There are many ways to estimate parameters: (1) from the results of specialized plots that
may have been created in the analysis; (2) from straight lines drawn on the loglog plot
(wellbore storage, IARF, fractures, closed systems, etc.); (3) from interactive features picking
the corresponding part of the derivative signature; (4) by manual input.
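As a small illustration of estimates from straight lines, the sketch below applies the classic IARF semilog relations in field units to derive permeability and skin from the slope of the semilog straight line; the input values are hypothetical, and this is only a sketch of the standard textbook formulas, not of any particular software feature.

import math

def kh_and_skin_from_semilog(m_slope, q, B, mu, h, p1hr, pwf0, phi, ct, rw):
    """Classic IARF semilog estimates (field units):
       k = 162.6 q B mu / (m h)
       s = 1.1513 [ (p1hr - pwf0)/m - log10( k / (phi mu ct rw^2) ) + 3.23 ]
    m_slope: semilog slope in psi/cycle, p1hr: straight-line pressure at 1 hour."""
    k = 162.6 * q * B * mu / (m_slope * h)
    s = 1.1513 * ((p1hr - pwf0) / m_slope
                  - math.log10(k / (phi * mu * ct * rw ** 2)) + 3.23)
    return k, s

# Hypothetical input values, for illustration only:
k, s = kh_and_skin_from_semilog(m_slope=50.0, q=1000.0, B=1.2, mu=1.0, h=100.0,
                                p1hr=4500.0, pwf0=4200.0, phi=0.2, ct=1e-5, rw=0.3)
print(f"k = {k:.1f} mD, skin = {s:.1f}")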
For complex analytical models, only a few parameters, or relationships between parameters,
will be determined in a unique way from the well test response. The other parameters or
missing relations will be input from other sources of information. If this missing information is
not available, the problem will remain under-specified.
The previous remark on parameter estimation is even more critical when using numerical
models, where the geometry will essentially be built from prior knowledge of the reservoir, and
only a few ‘global’ unknowns will be deduced from the test.
It is no longer a technical problem to transfer information and data directly and dynamically
between applications, and there are dozens of public or proprietary protocols to do so (OLE,
COM, DCOM, Corba, etc.). As a consequence models generated from third party applications
may be transferred and run in pressure transient analysis software. The most common
example is a ‘standard’ reservoir simulator run.
The model is generated and compared to the data, in terms of both pressure and Bourdet
derivative on the history plot, the loglog and semilog plots. In case other specialized plots are
used, the model will also be compared on these different scales. At this point, the engineer
may decide to reject the candidate model, or keep it and refine the parameter calculations.
Nonlinear regression: the principle is to use numerical optimization to refine the parameter
estimates by minimizing an error function, generally the standard deviation between the
simulated and measured pressures at carefully selected times. The derivative may also
be integrated into the error function. The most commonly used optimization algorithm is
Levenberg-Marquardt, but there are many variants.
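A minimal, generic sketch of this idea, using SciPy's Levenberg-Marquardt least-squares routine on a toy logarithmic pressure model; both the model and the data are hypothetical stand-ins, not KAPPA's implementation:

import numpy as np
from scipy.optimize import least_squares

def toy_model(params, t):
    """Toy drawdown model p(t) = pi - a*log10(t) - b, standing in for a real PTA model."""
    pi, a, b = params
    return pi - a * np.log10(t) - b

def residuals(params, t, p_obs):
    # Error function: difference between simulated and observed pressures
    return toy_model(params, t) - p_obs

# Synthetic 'observed' data with noise (hypothetical):
t = np.logspace(-2, 2, 200)
p_obs = toy_model([5000.0, 80.0, 50.0], t) + np.random.normal(0.0, 1.0, t.size)

# Levenberg-Marquardt refinement from a rough initial guess:
fit = least_squares(residuals, x0=[4900.0, 50.0, 10.0], args=(t, p_obs), method='lm')
print(fit.x)   # refined (pi, a, b); parameter bounds would require e.g. method='trf'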
Among the model parameters, some may be fixed by the engineer. For the others, the
engineer may control their upper and lower limits. The data points on which the error function
will be calculated may also be user controlled. One major choice will be whether the
optimization is restricted to the analyzed period, or if it is extended to data outside the
analysis. In the first case, the match at the end of the optimization process will be as good as
or better than the starting point. If the optimization is performed on points beyond the
analyzed period, the overall history match will be improved, but potentially at the expense of
the quality of the match on the period used for the diagnostic.
Fig. 3.E.12 – Setting regression points and controlling the optimization bounds
One can also run and display series of model generations corresponding to different values of a
given parameter in order to compare them on a single loglog plot. This is, to a certain extent,
the modern version of the type-curves, where dimensionless drawdown solutions are replaced
by the generation and extraction of detailed models with user preset ranges of parameters.
The figure below shows the sensitivity to the distance between a well and one single sealing
fault.
There has never been an industry standard for reporting, except the Canadian EUB format,
which is restricted to very basic results. Typically, professional interpretation reports will be
generated with two possible set-ups:
- A header document, from a word processor, with some 'copy-paste' of plots and results
from the PTA software, but with most of the 'mechanical' report delivered as an annex.
- An integrated document, typically from a word processor, where some plots and tables are
dynamically connected to the PTA software using OLE or COM automation. The
advantage of this solution is that it is much more flexible. Once a model template has been
set up, the reporting process gets shorter and shorter from one interpretation to the
next.
In any case the engineer must keep in mind that an interpretation is, at best, a best guess at a
given time, and the 'truth' can evolve with time. The key word here is 'interpretation'.
This is probably the worst possible statement we can imagine in PTA. The reservoir is very
unlikely to be an exact circle. What we have in PTA is a range of models that we KNOW to be
over-simplified. We simplify to turn the qualitative into the quantitative, and one must always
remain factual. Also, the number of significant figures of a result must be reasonable, or at least
not ridiculous. It is not because the nonlinear regression finished at a given number that we must
keep all the significant figures of this number. A much more reasonable statement would
be: 'If we consider that the late time behavior is linked to a closed system, a reasonable match
was obtained with a circular reservoir with a radius of 4,100 ft.'
‘The more I know about well testing, the more I worry’. H.J. Ramey Jr, 1989
In order to run the scenarios the engineer must rely on known information and data or make
reasonable assumptions. To explore ‘what-if’ scenarios, sensitivity studies are carried out and
based upon these the engineer can choose both downhole and surface equipment including,
amongst others, pressure gauges with adequate accuracy and resolution. Surface equipment
must be selected which includes facilities to measure the flowrate of gas, oil and water. This
done, the engineer can make the safest and most economical plan to reach the test objectives.
3.F.1 Safety
Safety is the primary concern in all well testing operations and is the mandatory objective that
must be met before any other. With the safety constraints applied, the objectives of the test
must be defined, operational limitations identified, and scenarios considered that clearly show
whether the objectives will be met and, if not, why not. The safety aspects of well testing are
covered in a variety of documents and books available on the internet and elsewhere. All
companies have their own procedures and standards, and these should all be fairly similar as
they have to conform to the rules and regulations set down by the various regulatory bodies
overseeing different parts of the world. This section is just to remind the reader that Hazard and
Operability studies (HAZOP) for well test operations exist and must always be applied to
ensure safe practice and operation.
3.F.2 Objectives
It sounds obvious, but you must know what you want to achieve: what are the ultimate results
you would like to obtain from the considerable investment involved in running such a test, and
what are the results going to be used for? An assessment of the value of the results is also
necessary to avoid 'overkill' and unnecessary expenditure. Below is a non-exhaustive list of
parameters one may want to determine in a well test:
- Well model (fractures, partial penetration, slanted and horizontal well parameters)
- Reservoir model (double porosity, layer and composite parameters)
- Boundary model (distance and type of boundaries)
- Permeability
- Skin
- Heterogeneities (double-porosity, layer and composite parameters)
- Static pressure
- Layer parameters (properties and pressure)
- Layer contributions
As a general rule, the higher the sampling rate of a tool, the better it is for pressure transient
analysis. This is certainly the case for downhole pressure and temperature
measurements, but it is less critical for surface flow rate measurements.
The tools must be carefully selected. The downhole gauges must have an accuracy adequate for
the expected pressure response. If the differential pressure during a test is expected to be
small (e.g. high permeability and low formation damage) then accuracy and resolution
become an important issue. It is also important to consider whether a high sampling rate will
have an adverse effect on a tool's accuracy. Whether to program a memory gauge or to use a
real-time surface read-out gauge is another issue that needs to be resolved.
If multiphase production is expected in the wellbore, or even in the formation, then certain
considerations must be addressed. To decrease the wellbore volume, downhole shut-in may be
considered to minimize the chances of phase segregation during a build-up, and careful
placement of the gauge in the test string is important.
The figure below illustrates the programmed change in pressure acquisition rate just prior to a
build-up. The increase in sampling rate increases the noise.
When the diagnostic plot is affected by noise, the recognition of the flow regimes becomes
more difficult and the choice of the correct interpretation model becomes ambiguous, as the
following illustration shows.
The behaviour of the period that will be analysed is very dependent upon the previous history,
and in particular upon any disturbances immediately before the period in question. If flow rate
changes or short shut-in periods immediately prior to the subject period cannot be avoided, at
least the value of the flow rates should be captured. This is a detail that should be stressed in
any test program.
During a build-up that was designed for analysis purposes, avoid, as far as safety permits, any
surface operations that may jeopardize the usefulness of the period. The example below
illustrates a real case where some surface operations affected the build-up period and rendered
it, for all practical purposes, useless for any confident analysis.
There are several different types of tests, mainly depending upon the type of well:
- Exploration
- DST
- Formation testing
- Appraisal
- Production
- Dynamic gradient
- Static gradient
The planning of any of these types of tests involves setting a scenario to theoretically evaluate
whether the objectives of the test will be met.
3.F.5 Scenario
In order to set a scenario, some of the information is usually known and some is assumed.
The quality and quantity of the known parameters depend upon the amount of work
already carried out on the prospect: whether the planned test is to be carried out in a known field,
whether the well has already been tested at some earlier stage, etc. The table below summarises
some of the data necessary to set the scenario, which will be used to theoretically simulate the
test on which the final program will be based.
In order to illustrate the concept we will devise a scenario where the main objective of the test
is to determine skin, permeability and the distance to a seismically defined fault at some distance
from the well (1500 ft).
The analysis of nearby wells has indicated that the reservoir is homogeneous with a
permeability-thickness of some 1000 mD.ft. The skin is of course unknown, but is believed to
become positive after drilling in this relatively high permeability rock. The PVT parameters are
known in this saturated reservoir, and porosity and net drained thickness have easily been
determined from open hole logs in advance of the planned test.
The well has been on production in excess of a year. The well will be shut in to run two
memory gauges in tandem, set in a nipple 100 ft above the top of the perforations. The sampling
rate of the gauges is set at a minimum of one second. The plot below illustrates how
a five-second sampling rate with a quartz gauge of 0.01 psi resolution and 1 psi noise should
look on a diagnostic loglog plot.
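A rough sketch of how such a gauge response can be emulated in a design study: a purely illustrative synthetic signal, with resolution modelled as quantization and noise as a Gaussian perturbation. None of this is specific KAPPA functionality, and the signal shape is hypothetical.

import numpy as np

def emulate_gauge(p_true, resolution_psi, noise_psi, seed=0):
    """Degrade a synthetic pressure signal with gauge noise and resolution.
    Noise is added as a Gaussian perturbation, resolution as quantization."""
    rng = np.random.default_rng(seed)
    noisy = p_true + rng.normal(0.0, noise_psi, p_true.size)
    return np.round(noisy / resolution_psi) * resolution_psi

# Synthetic build-up-like response sampled every 5 seconds over 100 hours:
dt = np.arange(5.0, 100.0 * 3600.0, 5.0) / 3600.0        # hours
p_true = 3000.0 + 60.0 * np.log10(1.0 + dt * 100.0)      # hypothetical shape
p_gauge = emulate_gauge(p_true, resolution_psi=0.01, noise_psi=1.0)
# p_gauge vs dt can then be processed like real data (Bourdet derivative, loglog plot)
# to judge whether the expected features remain visible above the noise.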
The duration of the shut-in that permits the running and setting of the gauges is set to 24
hours, to allow the pressure to stabilize. This is followed by a single constant production step of
1000 stb/d for 24 hours, and then by a build-up of 100 hours. The 100 hours was chosen
as the limit of the build-up time: the intention is not necessarily to build up for the full 100 hours,
but this duration is, from most practical viewpoints, the limit of most build-ups in
production wells because of the production loss. This allows the optimum build-up time required
to find the permeability, skin and distance to the fault to be determined. The test design option
of the software was then used to calculate different scenarios and 'what-if' sensitivities. The
simulated pressure history is shown in the figure below.
The defined production history and the model were used to study the sensitivity to skin by
running various models and comparing them. From the figure below it can be seen that
infinite acting radial flow is reached for a skin of -2 at 7 hours of shut-in time, and for a skin of 5
at 14 hours of shut-in time. The combination of a skin above 5 and the fault will mask the radial
flow, and permeability thus cannot be determined with any confidence. The sealing fault is
detected after about 25 hours of shut-in time; however, the doubling of the slope will only occur
after the end of the 100-hour shut-in.
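As a back-of-the-envelope complement to such a design study, the classic field-unit radius-of-investigation formula gives an order of magnitude for the time needed to 'feel' a feature at a given distance. The sketch below uses hypothetical reservoir values, not the actual design results, and is only a rule of thumb; the real sensitivity is assessed with the full model as above.

# Classic field-unit radius of investigation: r_inv = sqrt( k*t / (948*phi*mu*ct) ),
# inverted to estimate the time (hours) at which a feature at distance L is first felt.
def time_to_investigate(L_ft, k_md, phi, mu_cp, ct_per_psi):
    return 948.0 * phi * mu_cp * ct_per_psi * L_ft ** 2 / k_md

# Hypothetical values, for illustration only:
t_fault = time_to_investigate(L_ft=1500.0, k_md=50.0, phi=0.2,
                              mu_cp=0.5, ct_per_psi=1e-5)
print(f"Fault first felt after roughly {t_fault:.0f} hours")
# The doubling of the semilog slope occurs significantly later than this first detection.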
The next figure illustrates that when the storage increases above 0.01 bbl/psi and the skin is
maintained at +10, then radial flow will not be reached and the identification of the fault
becomes ambiguous.
A final program is issued; it contains all the necessary information to allow the field personnel
to carry out the well test safely and to ensure that all necessary equipment is available.
Specific responsibility levels, and who is accountable for which part of the operation, are defined
in the document.
Each phase of the operation is identified, and the durations of flow and shut-in periods are clearly
defined. Usually a flow sequence is defined with one single and constant flowrate; this is
sometimes confusing, and the field personnel may try to achieve such a constant flowrate by
continuously adjusting the choke. To increase the chances of obtaining a solid analysis from
the data, it is however recommended to avoid any choke changes during a single defined
flowing period; a rate that drifts slightly is acceptable as long as the flow rates are measured.
Finally, the program is just that, a program; proper field supervision is of course indispensable,
and adjustments to the plan must be continuous as conditions change.
Quality control manuals and procedures had to be presented to prove that every effort was
made by the involved parties to ensure that work, equipment and preparation procedures
were documented and followed. These documents had to be approved by a 'controlling body'
before the company was qualified as an approved contractor. The QA/QC manuals described
and identified the procedures to follow, backed up by checklists, tests, inspections and
certification by approved inspectorates.
This drive was mainly directed towards offshore construction and sub-sea technology to
optimize equipment performance and the quality of engineering works, including the safety of
oil and gas installations and oil field operations.
Service companies were no exception, but QA/QC was limited to planned and regular
calibration of mechanical and electronic measuring devices. As service companies claim no
responsibility for any loss or damage due to faulty measurements and interpretations, there
was no procedure to validate measured data before analysis.
Methods were however developed in the early 1990s. They allow quality control of downhole
pressure and temperature measurements, and they ensure that the data are valid and usable by
the engineer for further analysis. Such validation increases the confidence in the results of the
analysis and eliminates, to a large degree, errors that could lead to major mistakes in the
decisions made for the optimum development and production of petroleum reserves.
3.H.2 Background
The introduction of the Bourdet derivative revolutionized the approach to PTA. It gave us
deeper sensitivity and multiplied our analysis ability. It also complicated the diagnostics by
revealing phenomena unseen and not understood until then.
This sensitivity had a price: as both the reservoir response and operational problems affect the
same pressure signal, too often the Bourdet derivative response to fluid movements in the
wellbore, phase segregation and temperature anomalies would wrongly be associated with a
reservoir feature and interpreted as such.
It was necessary to enable the engineer to differentiate between the data representative of the
reservoir response and the part of the signal caused by other phenomena.
The techniques described in this chapter were developed as a result of severe interpretation
problems encountered in many fields, and led to the introduction of the concept of Differential
Pressure Analysis.
It will be shown how the use of pressure differentials measured between pressure gauges
placed at different levels in the test string can help the interpreter identify the pressure data
that is valid for interpretation, and save time by eliminating useless data caused
by anomalies and wellbore phenomena. The method takes 'Quality Control' one step further,
and should be an integral part of the overall QA/QC programs of any company dealing with
downhole measurements of pressure and temperature.
The study of these differences can reveal the following problems and has a direct impact on
the choice of the data measurements for a valid PTA:
Identify the movement of fluid interfaces (water / oil / gas);
By convention the pressure difference between the gauges is calculated so that an increase in
the 'difference channel' represents an increase in the fluid density between the gauge sensing
points, and a decrease a reduction of the fluid density, i.e.:
$\Delta p = p_{lower} - p_{upper}$
The 'difference channel' behaviour is the same whatever the gauge offset. The upper gauge
may well read a higher pressure than the lower gauge, possibly due to a gauge problem or just
because of accuracy limits, but the 'difference channel' would still have the same identifiable shape.
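A trivial sketch of this sign convention (the arrays are hypothetical; the convention is the one stated above):

import numpy as np

def difference_channel(p_lower, p_upper):
    """Difference channel with the convention dp = p_lower - p_upper:
    an increase means denser fluid between the sensing points, a decrease lighter fluid."""
    return np.asarray(p_lower) - np.asarray(p_upper)

# Illustrative use: a gas-oil interface moving down past both sensors
p_up  = np.array([3000.0, 3000.0, 3000.0, 3000.0])   # upper gauge, psi (hypothetical)
p_low = np.array([3004.0, 3003.0, 3002.0, 3002.0])   # lower gauge, psi (hypothetical)
print(difference_channel(p_low, p_up))               # [4. 3. 2. 2.]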
3.H.4 Basics
The simple analysis is based upon the study of the pressure and temperature differences
between two or more gauges placed in the test string at different depths. The figure below
shows schematically what happens to the pressure at the different sensors, if a ‘gas-oil’
interface is moving downwards and passes the sensors.
The example assumes that any background behavior is following a constant transient (trend)
or is in Pseudo-Steady state (PSS).
Once the gas-oil interface reaches the upper sensor, the pressure at this sensing point remains
'constant' while the interface moves towards the lower sensing point.
The pressure at the lower sensor declines linearly if the fluid interface moves at a constant speed.
It becomes constant again after the interface has moved below the lower sensor. The
change in the pressure difference between the two sensing points reflects the difference in fluid
gradient between oil and gas.
The following illustration represents the ‘difference channel’ between the two sensing points,
and by simple analysis it can be determined what fluid phase changes have caused the
observed phenomenon (see QA/QC).
In practice this has proved to be all but impossible, although models that take into
account changing wellbore storage can sometimes reproduce the observed effects, with
little added quantitative value. The important task is to identify the problem and discount the
period of perturbation when the data are analysed, by simply ignoring this part of the data
and being able to explain why it was ignored.
The simple identification that a problem exists will also help the engineer in making static
pressure corrections. The fact that the pressure gauge sensing points are seldom at the level
of the sandface is often overlooked by the interpretation engineer.
Classically, the static pressure (pi, p*, p-bar, final build-up pressure) is corrected:
- From the gauge depth to the sandface, using a static well gradient.
- From the sandface to a common reservoir datum, using the reservoir gradient and taking into
account any gradient changes.
This correction is usually done manually, as sketched below, and is essential to establish
reservoir pressure trends, declines and depletion rates.
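A minimal sketch of this two-step correction (hypothetical depths and gradients; a real correction would account for gradient changes across fluid contacts):

def correct_to_datum(p_gauge, gauge_depth, sandface_depth, datum_depth,
                     well_gradient, reservoir_gradient):
    """Two-step static pressure correction (psi, ft, psi/ft; depths positive downwards):
       gauge -> sandface with the static well gradient,
       sandface -> datum with the reservoir fluid gradient."""
    p_sandface = p_gauge + well_gradient * (sandface_depth - gauge_depth)
    p_datum = p_sandface + reservoir_gradient * (datum_depth - sandface_depth)
    return p_sandface, p_datum

# Illustrative use: gauge 100 ft above the sandface, datum 50 ft above the sandface
p_sf, p_dat = correct_to_datum(p_gauge=3500.0, gauge_depth=9900.0,
                               sandface_depth=10000.0, datum_depth=9950.0,
                               well_gradient=0.35, reservoir_gradient=0.30)
print(p_sf, p_dat)   # 3535.0 3520.0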
In the past the dynamic pressure was not corrected as this involved advanced modeling with
multiphase flow. Today many such flow models are available. The advanced correction of
dynamic pressure data is covered in the chapter on wellbore models.
This only gives approximate results and cannot be used for all types of wellbore phenomena.
Nevertheless it enhances the operational understanding and allows an intelligent correction
to reservoir datum to be made with more confidence. Such analysis will also enhance the
confidence in the material balance calculation and the identification of drainage patterns.
Practically, it involves choosing several recognizable events on the difference channel (blue
circles in the figure below) and transferring the corresponding delta pressure values to a simple
spreadsheet. Fluid gradients are estimated from the known fluid densities, and the resulting
changes in the pressure gradient are made consistent by applying a global gauge offset.
This method does not only reveal wellbore anomalies. It also determines the pressure gauge
offset, which has to be within the range of the gauge manufacturer's claimed accuracy
and resolution.
Columns (2) and (3) are read directly from the difference channel on the quality control plot.
Column (4) is the differential gradient calculated from column (3). Columns (5) and (6) are
intelligent guesses or assumptions by the user that normalize column (8), i.e. the same
implied gauge offset applies whatever the assumed fluid phase present.
The implied offset is then entered by the user in the appropriate box below the table, and the
residual differences will become zero or close to zero if the correct assumptions as to the fluids
present in the wellbore have been made.
The implied offset becomes the gauge offset, which has to be within the gauge specifications to
be acceptable to the operator.
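A rough numerical sketch of this normalization: hypothetical gauge spacing, events and assumed fluid gradients are used below, and the column numbering of the actual table is not reproduced.

# For each selected event, the measured difference channel value is compared with the
# hydrostatic difference implied by the assumed fluid between the gauges; a single
# global gauge offset should make all residuals close to zero.
gauge_spacing_ft = 30.0
events = [                      # (measured dp psi, assumed fluid gradient psi/ft)
    (11.0, 0.35),               # assumed oil between the gauges
    (3.5, 0.08),                # assumed gas between the gauges
    (14.0, 0.45),               # assumed water between the gauges
]

implied_offsets = [dp - grad * gauge_spacing_ft for dp, grad in events]
global_offset = sum(implied_offsets) / len(implied_offsets)
residuals = [off - global_offset for off in implied_offsets]

print(implied_offsets)   # [0.5, 1.1, 0.5] -> roughly the same offset for every event
print(global_offset)     # ~0.7 psi, to be compared with the gauge specifications
print(residuals)         # small residuals support the assumed fluid phases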
This analysis gives a much better idea of which fluid gradient to use to correct the
gauge pressure to the top of the sandface.
Another product of this approach is being able to decide which part of the data is really valid
for pressure transient analysis, as all data affected by segregation or fluid movements must
be discounted in the interpretation process. The impact of not knowing and understanding the
wellbore environment is now described.
The impact on the Pressure Transient Analysis can be substantial, as illustrated below. The
upper gauge is affected by phase segregation and the shape of the derivative is clearly
distorted, resulting in the wrong choice of interpretation model.
Fig. 3.I.2 – Lower gauge: homogeneous
Fig. 3.I.3 – Upper gauge: double porosity
Gauge drift is caused by unstable electronic components and fatigue of the sensing material
used in the instruments. Strain gauges are particularly susceptible to this problem.
Drift during a relatively short well test is uncommon, especially today as the quality of
electronic gauges has improved immensely. However, it still happens, and a severe drift
will lead to a wrong PTA diagnostic.
The drift problem is more common during long term measurements and can therefore be a real
problem with permanent downhole gauges (PDG).
In any case it is important to check the validity of the data through the QA/QC procedures
described in this document before attempting any serious analysis. To identify a drifting gauge
it is necessary to have two or more measurements and to study the difference channel
constructed between a reference gauge and each of the other gauges.
The following figures illustrate the difference channel and the impact on the analysis of the
drifting gauge if we did not know that such a drift exists.
Fig. 3.I.5 – Match non-drifting gauge, no fault
Fig. 3.I.6 – Match drifting gauge, with fault
When different phases are produced, the redistribution of fluids in the wellbore at shut-in may
be almost instantaneous and produce what is called 'gas humping'. This may (more rarely) also
be seen in a water / oil case. At shut-in the fluid mixture separates and a total reversal of fluids
occurs in the wellbore: the heavier fluid moves to the bottom of the well and the lighter fluid,
usually gas, accumulates at the top. This can produce a hump in the pressure at shut-in,
sometimes rising above the reservoir pressure.
Only when the two phases have stabilized will the build-up return to a normal pressure difference,
and only then may the response be analyzed.
The figure below illustrates how the pressure at the sandface at early time can actually rise
above reservoir pressure. Imagine a well with a wellhead pressure A. The upper part of the
well is filled with a liquid/gas mixture of a certain weight and the lower part of the well is
predominantly gas (admittedly, a rather theoretical assumption). The gas phase at the bottom is
considered weightless, and a frictionless piston separates the two phases. The gauge pressure
at the bottom of the well is therefore the wellhead pressure plus the weight of the mixture,
A+ΔP. Let us now turn this cylinder upside down: the wellhead pressure becomes A+ΔP (the gas
pressure is unchanged) and the bottom hole pressure increases to A+2ΔP. This induces an
abnormal rise in the bottom hole gauge pressure that can be above reservoir pressure.
The following figure illustrates the hump caused by this phenomenon. This is a
classical feature in Pressure Transient Analysis.
Gas pseudopressure: $m(p) = 2\int_0^p \frac{p}{\mu Z}\,dp$
Using the gas pseudopressure instead of the pressure, the diffusion equation remains valid and
the same methodology presented above can apply.
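A minimal numerical sketch of this transform: cumulative trapezoidal integration of $2p/(\mu Z)$ over a hypothetical PVT table. This is only an illustration of the definition above, not KAPPA's implementation.

import numpy as np

def pseudo_pressure(p_table, mu_table, z_table):
    """m(p) = 2 * integral of p/(mu*Z) dp, evaluated at every pressure of the table
    (here from the first table point) by cumulative trapezoidal integration."""
    p = np.asarray(p_table, dtype=float)
    integrand = 2.0 * p / (np.asarray(mu_table) * np.asarray(z_table))
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p)
    return np.concatenate(([0.0], np.cumsum(increments)))   # psi^2/cp

# Hypothetical PVT table (pressure in psia, viscosity in cp, Z-factor):
p_tab  = np.array([14.7, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
mu_tab = np.array([0.012, 0.014, 0.016, 0.019, 0.022, 0.025])
z_tab  = np.array([0.998, 0.95, 0.92, 0.91, 0.93, 0.96])
m_of_p = pseudo_pressure(p_tab, mu_tab, z_tab)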
In addition to these methods, the gas case presents a few particularities, presented here.
It is first necessary to define the vertical pressure profile in the well. The Saphir/Topaze
internal flow correlations, or an actual lift curve generated by a specialized program (e.g.
Amethyste), can be used for this.
The available correlation for gas is an external lift curve or the internal Cullender & Smith
method, with two modifications for handling water and condensate.
The correlation is based on the gas properties as defined in the PVT setup, and a general
friction factor is calculated using the Colebrook and White equation. Note that when dealing
with a condensate case with equivalent gas gravity and total rates, the proper gradient and
rates are used in the correlation to account for the presence of condensate. The presence of
water can be accounted for, based on a constant water to gas production ratio.
The solution selected in Saphir is to include both the hydrostatic and the friction pressure losses
in the model, correct the generated model response to the actual gauge depth, and then
return all the results at the sandface.
In fact, the gauge pressure is neither transformed nor corrected; it is the model that is
brought to the level of the gauge.
You can observe that the results (pi and skin) are now returned at the sandface and that the rate
dependent skin attributed to the formation is now a lot smaller; the difference should be attributed
to the pressure loss through, in this example, the 1000 ft of 1.5" ID tubing.
Once the sandface parameters have been returned by the model match, the sandface pressure
can be properly corrected to a common reservoir datum through the, hopefully appropriate,
knowledge of the fluid gradients in the reservoir.
Diffusion equation: $\frac{\partial m(p)}{\partial t} = 0.0002637\,\frac{k}{\phi\mu c_t}\,\nabla^2 m(p)$
As soon as we have a pressure gradient in the reservoir, the diffusion term, and especially the
product μct, will differ from one reservoir block to the next.
Although one could consider that this also happens during any well test, it becomes critical
when the reservoir average pressure declines in the reservoir and/or in the well drainage area.
It is therefore necessary to adjust the models or the data.
If we take a real gas simulation of a long term limit test, or of a production survey using
permanent gauges, and try to match it with an analytical model where the diffusion term is taken
at initial pressure, we see a divergence between the simulated pressure and the measured
data, even though the reservoir geometries and the PVT used are strictly the same.
The process is as follows (a sketch of the pseudotime integral is given after this list):
- It starts from an initial estimate of the reservoir initial pressure and volume. These two
parameters are required to calculate the initial gas in place Gi.
- At each time step, the cumulative production is calculated and subtracted from the initial
gas in place. A standard p/Z calculation is used to estimate the reservoir average pressure
at this time, which is then used to calculate the pseudotime integral, and hence the pseudotime.
- This pseudotime is used on the loglog plot; the data is therefore expanded to the right,
allowing the match with a classic closed system type-curve.
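A rough sketch of the pseudotime integral evaluated at the average reservoir pressure, in normalized form; it assumes μ(p̄) and ct(p̄) have already been evaluated from the PVT at each time step, and all values below are illustrative.

import numpy as np

def pseudo_time(t, mu_avg, ct_avg, mu_i, ct_i):
    """Normalized (material balance) pseudotime:
       ta(t) = (mu*ct)_i * integral_0^t dt' / ( mu(p_avg) * ct(p_avg) ),
    evaluated by trapezoidal integration along the time array."""
    t = np.asarray(t, dtype=float)
    integrand = 1.0 / (np.asarray(mu_avg) * np.asarray(ct_avg))
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    return mu_i * ct_i * np.concatenate(([0.0], np.cumsum(increments)))

# Illustrative use (hypothetical values of mu and ct at the average pressure):
t_hr   = np.array([0.0, 100.0, 200.0, 300.0])
mu_avg = np.array([0.020, 0.019, 0.018, 0.017])      # cp
ct_avg = np.array([2.0e-4, 2.2e-4, 2.5e-4, 2.9e-4])  # 1/psi
ta = pseudo_time(t_hr, mu_avg, ct_avg, mu_i=0.020, ct_i=2.0e-4)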
This method was useful at the time when one only had type-curves as models. Today computers
can do much better and faster. The main shortcoming was that the initial estimates
of the volume and of the initial pressure were made before the model was matched. The model
would then give values of initial pressure and volume which might not be the same as the initial
guess, hence requiring another modification of the data, another match, and so on. The
process would, however, converge quickly.
We consider the total gas in place at initial pressure. Vres is the pore volume occupied by the
gas, Tres is the fluid temperature at reservoir conditions, and Gi is the initial gas in place at
standard conditions.
So we get immediately Gi: $G_i = \frac{p_i\,V_{res}\,T_{sc}}{Z_i\,p_{sc}\,T_{res}}$
We now consider, at time t, the same situation after a total cumulative production of Q(t). We
now want to calculate the average reservoir pressure:
Same amount of fluid at standard conditions: $p_{sc}\left[G_i - Q(t)\right] = n(t)\,R\,T_{sc}$
So we get immediately: $G_i - Q(t) = \frac{\bar{p}\,V_{res}\,T_{sc}}{\bar{Z}\,p_{sc}\,T_{res}}$
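A minimal sketch of how the average pressure can be backed out of this relation at each time step: a simple fixed-point iteration on $\bar p/\bar Z$, with a hypothetical Z-factor trend standing in for the real PVT.

def average_pressure(Gi_scf, Q_scf, p_init_psia, z_of_p, tol=0.01, max_iter=50):
    """Solve p_bar from the p/Z material balance:
       p_bar / Z(p_bar) = (p_i / Z_i) * (1 - Q/Gi),
    by fixed-point iteration: p_bar = Z(p_bar) * target."""
    target = (p_init_psia / z_of_p(p_init_psia)) * (1.0 - Q_scf / Gi_scf)
    p_bar = p_init_psia
    for _ in range(max_iter):
        p_new = z_of_p(p_bar) * target
        if abs(p_new - p_bar) < tol:
            break
        p_bar = p_new
    return p_bar

# Hypothetical linear Z-factor trend, for illustration only:
z_of_p = lambda p: 0.85 + 2.0e-5 * p
p_bar = average_pressure(Gi_scf=50e9, Q_scf=5e9, p_init_psia=5000.0, z_of_p=z_of_p)
print(round(p_bar, 1))   # average reservoir pressure after producing 10% of Gi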
As the problem is nonlinear, this requires Saphir NL and the use of a nonlinear solver, but in
the case of single-phase gas this will rarely require more than one iteration. So this sort of model,
for reasonably simple geometries, will be both fast and accurate.
For information, the case presented above is an actual simulation using Saphir NL.
This is illustrated in the figure above. D is called the (linear) non-Darcy flow coefficient.
In order to assess the rate dependency it is necessary to conduct a multiple flowrate test; these
tests are described in the chapter on Well Performance Analysis.
The classical way of determining the skin versus rate relationship is to use semilog analysis of
each flow period, when this analysis is valid, and then to plot the resulting skin versus the
corresponding rate, as illustrated in the figure below. This returns the skin without turbulence,
i.e. the mechanical skin S0 as the intercept of the assumed straight line, and the rate dependency
D from the slope. It is important to remember that a flow-after-flow test with only
one single shut-in may not produce the required results, as a semilog analysis of a producing
period may be impossible due to the inherent fluctuation of the rates masking the pressure
response where semilog analysis would be valid.
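As a small illustration of that plot, the sketch below fits a least-squares straight line S(q) = S0 + D·q through hypothetical skin values obtained from the shut-ins of a multi-rate test:

import numpy as np

# Hypothetical skins from semilog analysis of the shut-ins of an isochronal-type test
q_mscf_d = np.array([2000.0, 4000.0, 6000.0, 8000.0])      # gas rates
skin     = np.array([3.1, 4.2, 5.0, 6.1])                  # total skin per rate

# Straight-line fit S(q) = S0 + D*q: slope D (rate dependency), intercept S0 (mechanical skin)
D, S0 = np.polyfit(q_mscf_d, skin, 1)
print(f"S0 = {S0:.2f}, D = {D:.2e} per Mscf/d")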
It is therefore recommended to use the types of tests that have intermediate shut-ins,
isochronal or modified isochronal, where we stand a better chance of obtaining the skin from
the analysis of these shut-ins. The plot of skin versus rate is done automatically in Saphir. The
results can then be transferred automatically and the model generated, with regression to
improve the model match as necessary.
If the skin versus rate plot fails because the semilog analysis is not valid, the rate dependency
parameters can still be set in the model dialog using an initial guess. The model match
can then be refined by regression.
The figures below show the comparison between a simulation run with a constant skin and a rate-dependent skin:
$-\dfrac{\partial P}{\partial x} = \dfrac{\mu}{k}\,u + \beta\,\rho\,u^2$

It can be evaluated from the linear assumption described above using ds/dq with:

$\beta = \dfrac{ds}{dq}\,\dfrac{2\pi\,r_w\,h\,\mu}{\rho\,k}$

$\beta = \dfrac{0.005}{\left[\phi\,(1 - S_w)\right]^{5.5}\,k^{0.5}}$
PA and PTA share a large technical kernel, and are often performed by the same engineers.
This was not always the case, and we will start with a short history of PA.
PA started in the 1920s on a purely empirical basis, and as a financial tool. There was no technical background to these relations; the objective was to find the right decline function that fit the past history and could assess the US$ revenue in the future.
In the 1940s, the formulation of constant pressure exponential, hyperbolic and harmonic rate
decline was published (Arps, 1945). This was still partly empirical, but some parameters could
be quantified using specific analyses.
In the 1960s came the first series of type-curves, still assuming constant flowing pressure. The
Fetkovich type-curve combined two families of curves: one for the transient flowing period and
one for the late time boundary dominated response. Ten years later Carter extended it to the
gas case. Other type-curves were later published to address further complex configurations
including layered and fractured reservoirs. This was done in parallel to the theoretical work
done in PTA.
At this stage the methodology was somewhat equivalent to the standard procedure in PTA in
the late 1970s. The Arps plot was the counterpart of the Horner plot, and the constant
pressure type-curves were the counterpart of the well test drawdown type-curves.
As we have seen the introduction of the Bourdet derivative and PCs dramatically changed PTA
in the 1980s and 1990s. This did not happen as fast in PA, where most work continued to be
done using Arps and Fetkovich methods, generally as side applications linked to the production
databases. Unlike PTA, classical methodology in PA was not phased out. In many mature
reservoirs, permanent gauges cannot be justified economically, and PA methods will stay as
they are, because there are generally no data to justify a more sophisticated approach.
However the theory had evolved in ways akin to those in PTA. Blasingame et al. introduced a
variable rates/variable pressure type-curve as a loglog plot of productivity index vs. material
balance time, complemented by the equivalent of the Bourdet derivative. An adapted version
of the loglog plot, where rate-normalized pressure replaces the productivity index, was also
published. Additional solutions accounted for various well and reservoir configurations. So the
modern tools were available in theory before the end of the 1980s, but they were only recently
implemented in commercial PA applications, such as Topaze.
Advances in PA came in the late 1990s and early 2000s, partly due to the development of permanent pressure gauges. When engineers started receiving long-term
continuous pressure data, the first reaction was to load this into a PTA program: “I have rates;
I have pressures, so I treat this as a well test”. However PTA methodology was not designed
for this type of data, and engineers would sometimes perform incorrect interpretation by
overlooking specific assumptions that are no longer valid on the time scale of a permanent
gauge survey. Material balance errors and over-simplifications using Perrine’s approach for
multiphase flow property evaluation were, and are, among the most frequently encountered
errors.
$q(t) = \dfrac{q_i}{\left(1 + b\,D_i\,t\right)^{1/b}} \qquad\qquad Q(t) = \dfrac{q_i^{\,b}}{D_i\,(1 - b)}\left[q_i^{\,1-b} - q(t)^{\,1-b}\right]$

where $q_i$ is the initial rate, $D_i$ is the decline factor, and b a parameter varying between 0 and 1, defining the decline type. Three types are usually considered: hyperbolic, exponential and harmonic.
Exponential decline, $b \rightarrow 0$

It can be shown that the general decline equation tends to an exponential decline when b tends to 0:

$q(t) = q_i\,e^{-D_i t} \qquad\qquad Q(t) = \dfrac{q_i - q(t)}{D_i}$

Harmonic decline, $b = 1$

$q(t) = \dfrac{q_i}{1 + D_i\,t} \qquad\qquad Q(t) = \dfrac{q_i}{D_i}\,\ln\dfrac{q_i}{q(t)}$
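As a complement, the three decline types can be coded in a few lines. The sketch below is a minimal illustration (Python, Di in units consistent with t); the small-b branch is treated as the exponential limit.

    import numpy as np

    def arps_rate(t, qi, Di, b):
        """Arps decline rate: hyperbolic (0 < b < 1), exponential limit (b -> 0), harmonic (b = 1)."""
        if b < 1e-6:
            return qi * np.exp(-Di * t)
        return qi / (1.0 + b * Di * t) ** (1.0 / b)

    def arps_cumulative(t, qi, Di, b):
        """Cumulative production corresponding to arps_rate."""
        q = arps_rate(t, qi, Di, b)
        if b < 1e-6:
            return (qi - q) / Di
        if abs(b - 1.0) < 1e-6:
            return (qi / Di) * np.log(qi / q)
        return qi**b / (Di * (1.0 - b)) * (qi**(1.0 - b) - q**(1.0 - b))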
Decline curve equations are applicable only after the transient part of the response has
ceased, i.e. during boundary dominated flow. A general approach consists in the determination
of the three parameters directly, by non-linear regression. The traditional usage, however, is centered on the use of specific presentations where linearity is sought after the value of b has been fixed. Practically, the following scales / linearities can be used:
Most PA software allows the scale to be set to the above and more. The regression on the decline parameters is nonlinear, so the value of b can be determined by the regression rather than assumed from the particular scale.
Once the decline parameters have been obtained, and since the analytical expression of the
rate and cumulative are known, it is possible, from a given abandonment rate, to calculate the
corresponding abandonment time, and hence the recovery at abandonment.
The abandonment rate is usually defined either as $q_a$ or as the ratio $\dfrac{q_a}{q_i}$.
The abandonment time is noted $t_a$ and the recovery at abandonment $N_p(t_a)$. The figure below illustrates this extrapolation of the Arps plot.
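A minimal sketch of this extrapolation, under the same assumptions as the decline equations above (qa, qi and Di in consistent units):

    import numpy as np

    def abandonment(qi, Di, b, qa):
        """Abandonment time t_a and recovery Np(t_a) for a given abandonment rate qa (sketch)."""
        if b < 1e-6:                                   # exponential limit
            ta = np.log(qi / qa) / Di
            np_ta = (qi - qa) / Di
        elif abs(b - 1.0) < 1e-6:                      # harmonic
            ta = (qi / qa - 1.0) / Di
            np_ta = (qi / Di) * np.log(qi / qa)
        else:                                          # hyperbolic
            ta = ((qi / qa) ** b - 1.0) / (b * Di)
            np_ta = qi**b / (Di * (1.0 - b)) * (qi**(1.0 - b) - qa**(1.0 - b))
        return ta, np_ta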
Exponential decline is widely used because of the simplicity of the associated graphical
methods. It leads to conservative reserves estimates. Besides, it can be demonstrated that
exponential decline is the late time behavior of a constant pressure production in a closed
reservoir, with a slightly compressible fluid assumption.
Starting from the pseudo steady state relation $p_i - p_w = m\,Q + b\,q$, with Q = cumulative production and

$m = \dfrac{1}{N\,c_t}$

Differentiating the two terms of the equation with respect to time:

$\dfrac{d\left(p_i - p_w\right)}{dt} = m\,\dfrac{dQ}{dt} + b\,\dfrac{dq}{dt}$

Under constant flowing well pressure conditions:

$0 = m\,\dfrac{dQ}{dt} + b\,\dfrac{dq}{dt}$

We have $\dfrac{dQ}{dt} = q$. Therefore:

$\dfrac{dq}{dt} = -\dfrac{m}{b}\,q \qquad\text{or}\qquad \dfrac{dq}{q} = -\dfrac{m}{b}\,dt \qquad\text{hence}\qquad \ln q = -\dfrac{m}{b}\,t + cst$

$q = \exp\left(-\dfrac{m}{b}\,t + cst\right)$

that can be written $q = q_i\,\exp\left(-\dfrac{m}{b}\,t\right)$
There are many situations however, where the general hyperbolic decline is more adequate.
This is the case in solution gas drive reservoirs.
In our opinion, and given the power of the non-linear regression, it is better to try and
determine all three parameters, including b, systematically.
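As a sketch of such a regression using scipy's nonlinear least squares (t_data and q_data are hypothetical boundary-dominated data points in consistent units):

    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(t, qi, Di, b):
        """General Arps decline used as the regression model."""
        return qi / (1.0 + b * Di * t) ** (1.0 / b)

    # Hypothetical boundary-dominated production data (time, rate)
    t_data = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
    q_data = np.array([950.0, 820.0, 640.0, 440.0, 260.0])

    (qi_fit, Di_fit, b_fit), _ = curve_fit(
        hyperbolic, t_data, q_data,
        p0=[q_data[0], 1e-3, 0.5],
        bounds=([0.0, 1e-6, 1e-3], [np.inf, 1.0, 1.0]))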
Above all, it is important to stress that decline curves have many limitations.
A refinement can be made for the case where the decline in the oil rate is caused by an increase in the water cut, mostly in water drive reservoirs with an unfavourable mobility ratio. If one replaces the oil rate by the oil cut $f_o$, the Arps equation can be used for wells with variable gross production. The same plots can be made and extrapolated: log($f_o$) vs t, $f_o$ vs Q and log($f_o$) vs Q.
4.B.2 Fetkovich
In 1980, Fetkovich introduced a type-curve combining the theoretical response of a well in a
closed reservoir, and the standard Arps decline curves. The motivation behind this work was to
come up with a loglog matching technique applicable to both the transient part of the data and
the boundary dominated flow period. By presenting both periods, the Type-Curve would avoid
incorrectly matching transient data on decline curves.
A determining preliminary step was that the exponential decline can be shown to be the long-
term solution of the constant pressure case. The Fetkovich type-curve is derived assuming a
slightly compressible fluid and constant flowing pressure. Extension to gas can be made with
the appropriate dimensionless rate expression, as described below. The original type-curve
presented by Fetkovich displayed rate only. A composite presentation including the cumulative
was later introduced to bring more confidence in the matching process and to reduce the effect
of the noise.
In the above figure the left region of the curves (green to blue) corresponds to the transient part of the response. On the right-hand side are the Arps decline curves (red to yellow). Note the legend on the left: the red Arps curve is for an exponential decline (b=0), the last yellow curve is for a harmonic decline (b=1).
The Fetkovich type-curve displays dimensionless values qDd, QDd versus tDd as defined below.
The dimensionless variables can be expressed in terms of the Arps decline curve parameters,
or in terms of the transient response parameters. The duality is due to the composite nature of
the type-curve showing once again a merge of a theoretical closed reservoir response, and the
empirical Arps stems.
Time

Dimensionless time:   $t_D = \dfrac{0.00634\,k\,t}{\phi\,\mu\,c_t\,r_w^2}$

Related by:   $t_{Dd} = \dfrac{t_D}{\dfrac{1}{2}\left[\left(\dfrac{r_e}{r_w}\right)^2 - 1\right]\left[\ln\dfrac{r_e}{r_w} - \dfrac{1}{2}\right]}$

Rate

Decline curve dimensionless rate:   $q_{Dd} = \dfrac{q(t)}{q_i}$

Dimensionless flow rate, oil:   $q_D = \dfrac{141.2\,q(t)\,B\,\mu}{k\,h\,(p_i - p_w)}$

Dimensionless flow rate, gas:   $q_D = \dfrac{50300\,T\,q(t)\,p_{sc}}{T_{sc}\,k\,h\,\left[m(p_i) - m(p_w)\right]}$ ;  with  $m(p) = \displaystyle\int_0^p \dfrac{2\,p\,dp}{\mu\,Z}$

$q_{Dd}$ and $q_D$ are related by:   $q_{Dd} = q_D\left[\ln\dfrac{r_e}{r_w} - \dfrac{1}{2}\right]$

Cumulative production

Decline curve dimensionless cumulative:   $Q_{Dd} = \dfrac{Q(t)}{N_{pi}}$
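For illustration, these transforms are simple to evaluate; the sketch below assumes the field units implied by the constants above (k in mD, t in days, q in STB/D, pressures in psia) and the oil case only.

    import numpy as np

    def fetkovich_dimensionless(t_days, q, k, h, phi, mu, ct, rw, re, B, pi, pw):
        """Fetkovich dimensionless time and rate for the oil case (sketch, field units)."""
        tD  = 0.00634 * k * t_days / (phi * mu * ct * rw**2)
        tDd = tD / (0.5 * ((re / rw)**2 - 1.0) * (np.log(re / rw) - 0.5))
        qD  = 141.2 * q * B * mu / (k * h * (pi - pw))
        qDd = qD * (np.log(re / rw) - 0.5)
        return tDd, qDd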
A match will bring values of re and kh, Di and qi. The type of decline b is not linked to any of the match ratios; it is obtained by selecting the correct type-curve stem. From the external boundary distance, the reservoir pore volume can be calculated. From the Arps parameters, the future performance can be forecast; Npi can be calculated, as well as Np for any specified abandonment rate.
$G_i = \dfrac{Q\,B_g}{B_g - B_{gi}}$

where $G_i$ is the gas initially in place and Q the cumulative gas production.

$\dfrac{P}{Z} = \dfrac{P_i}{Z_i} - \dfrac{P_i}{G_i\,Z_i}\,Q$

This equation indicates that a plot of $\dfrac{P}{Z}$ versus Q extrapolates to the gas initially in place $G_i$.
The validity and the accuracy of this method depend directly on the validity of the static
pressure and of the PVT parameters.
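As a minimal sketch (Python), the extrapolation reduces to a linear fit of p/Z versus Q; the input arrays are hypothetical static pressure surveys, their Z factors and the corresponding cumulative productions.

    import numpy as np

    def gas_in_place_from_pz(p_static, z_static, q_cum):
        """Extrapolate the p/Z vs Q straight line to p/Z = 0 to obtain Gi (sketch)."""
        p_over_z = np.asarray(p_static, float) / np.asarray(z_static, float)
        slope, intercept = np.polyfit(np.asarray(q_cum, float), p_over_z, 1)
        return -intercept / slope    # Q value at which p/Z extrapolates to zero, i.e. Gi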
Broadly speaking, one could say that the introduction of type-curve matching techniques in
production analysis has opened the way to applying methods developed for well test
interpretation to the analysis of production data. The main limitation in the Fetkovich type-
curve is the assumption of constant flowing pressure. Blasingame and McCray noted that using
a pressure normalized flow rate when the bottom-hole pressure varies significantly did not
remedy the problem. They sought functions that would transform the variable
pressures/variable rates solution into an equivalent constant pressure or constant rate
solution. They introduced two specific time functions, t cr the constant rate time analogy, and tcp
for constant pressure. For the liquid case, the constant rate analogy time function is defined as
the ratio of the cumulative and the flow rate:
$t_{cr} = \dfrac{Q(t)}{q(t)}$

When the normalized rate $\dfrac{q(t)}{p_i - p_w(t)}$ is plotted versus this function on a loglog scale, the boundary dominated flow period follows a negative unit slope line:
The Black Oil Pseudo Steady State flow rate equation is:

$\dfrac{q_o}{\Delta p} = \dfrac{1}{b_{o,pss} + m_{o,pss}\,\bar{t}}$

With:

$m_{o,pss} = \dfrac{1}{N\,c_t}\,\dfrac{B_o}{B_{oi}}$ ;

$b_{o,pss} = 141.2\,\dfrac{\mu_o\,B_o}{k\,h}\left[\dfrac{1}{2}\ln\left(\dfrac{4}{e^{\gamma}}\,\dfrac{A}{C_A\,r_w^2}\right) + s\right]$ ;

and $\bar{t} = \dfrac{N_p}{q_o}$

When Pseudo Steady State dominates, $\dfrac{q_o}{\Delta p}$ is a function of $\bar{t}$ at exponent (-1).

Therefore a loglog plot of $\dfrac{q_o}{\Delta p}$ vs $\bar{t}$ will show a negative unit slope straight line.
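In practice the two plot coordinates are simple array operations; below is a minimal sketch (Python) using trapezoidal integration of the rate to get Np. The arrays are hypothetical production history data in consistent units.

    import numpy as np

    def blasingame_xy(t, q, pwf, pi):
        """Material balance time Np/q and normalized rate q/(pi - pwf) (sketch)."""
        t, q, pwf = (np.asarray(a, float) for a in (t, q, pwf))
        np_cum = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))))
        t_mb = np_cum / np.maximum(q, 1e-12)        # material balance time (guard against q = 0)
        norm_rate = q / (pi - pwf)                  # normalized rate q / delta-p
        return t_mb, norm_rate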
Note: periods of very low rates give artificially high values of $\bar{t} = \dfrac{N_p}{q_o}$; the equation $\dfrac{q_o}{\Delta p} = \dfrac{1}{b_{o,pss} + m_{o,pss}\,\bar{t}}$ then tends to $\dfrac{q_o}{\Delta p} \approx \dfrac{1}{m_{o,pss}\,\bar{t}}$ and the points are found on the same -1 unit slope straight line.
Based on this result, Palacio and Blasingame introduced type-curves that could be used for
variable flowing pressure conditions. In order to improve the type-curve analysis the Bourdet
derivative was also considered. However, due to the noise inherent to the production data, the
derivative was not applied to the normalized flow itself but to its integral. More precisely, the
Palacio-Blasingame type-curve plot shows the following:
Normalized rate:

$PI(t) = \dfrac{q(t)}{p_i - p_w(t)}$

Normalized rate integral:

$PI_{Int}(t_e) = \dfrac{1}{t_e}\displaystyle\int_0^{t_e} PI(\tau)\,d\tau = \dfrac{1}{t_e}\displaystyle\int_0^{t_e} \dfrac{q(\tau)}{p_i - p_w(\tau)}\,d\tau$

Normalized rate integral derivative:

$PI_{Int.Der}(t_e) = \dfrac{\partial\,PI_{Int}}{\partial \ln t_e}$
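A minimal sketch of these last two quantities, assuming arrays of equivalent time te and normalized rate pi_t = q/(pi - pw) are already available (hypothetical inputs):

    import numpy as np

    def pi_integral_and_derivative(te, pi_t):
        """Normalized rate integral and its derivative with respect to ln(te) (sketch)."""
        te, pi_t = np.asarray(te, float), np.asarray(pi_t, float)
        cum = np.concatenate(([0.0], np.cumsum(0.5 * (pi_t[1:] + pi_t[:-1]) * np.diff(te))))
        pi_int = cum / np.maximum(te, 1e-12)        # (1/te) * integral of PI from 0 to te
        pi_int_der = np.gradient(pi_int, np.log(np.maximum(te, 1e-12)))   # d(PI_int)/d ln(te)
        return pi_int, pi_int_der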
The traditional method of using this presentation is in conjunction with type-curves for a
particular well model (see figure below).
This plot is used as a diagnostic tool, where the data and a model response are compared. The
model can be any model, analytical or numerical, single or multi-well, etc. One can either
display the ‘true’ model response, i.e. the response to the full pressure history, or the
response to a single pressure step. The single step response shows the signature of the model
in a clear and usable form, whereas the response to the real history is usually very erratic,
because the equivalent time is jumping back and forth in time.
In other words, for the liquid case, if we plot $\dfrac{p_i - p_w(t)}{q(t)}$ versus $t_e = \dfrac{Q(t)}{q(t)}$ on a loglog scale, the boundary dominated flow will exhibit a unit slope line, similar to pseudo-steady state in Pressure Transient Analysis. Furthermore, if we take the derivative of the normalized pressure with respect to the logarithm of $t_e$, the transient part will exhibit a stabilization at a level linked to the mobility.
The similarity with PTA is thus complete. Yet, the noise level on the derivative is usually too
high, see the figure above. One workaround is to work with a normalized pressure integral, in
a manner analogous to what was done on the Palacio-Blasingame type-curves.
Integral of normalized pressure:

$I(t_e) = \dfrac{1}{t_e}\displaystyle\int_0^{t_e}\dfrac{p_i - p_w(\tau)}{q(\tau)}\,d\tau$

Bourdet derivative of the integral of normalized pressure:

$I'(t_e) = \dfrac{\partial\,I(t_e)}{\partial \ln t_e}$
Using the integral preserves the signature of the flow regimes while significantly reducing the
noise. Hence such definitions provide a diagnostic tool where most of the usual well test
methods can be used. In particular, it is clearly possible to get an estimate of the reservoir kh
from the derivative stabilization level. The kh being known, one can then get a first estimate of
the reservoir size from the unit slope late time trend. These calculations are an integral part of
the loglog plot. It is possible to display either the ‘true’ model response, i.e. the response to the full pressure history, or the response to a single pressure step. The single step response, used in all the figures above, shows the signature of the model in a clear and usable form, whereas the response to the real history is usually very erratic, because the equivalent time is jumping back and forth in time, as illustrated in the figure below.
They show that the responses corresponding to distinct reservoir sizes all exhibit a straight line with a negative slope during boundary dominated flow, and all curves converge to the same value on the X axis, equal to $\dfrac{1}{2\pi}$. In other words, the following relation is established in all cases during boundary dominated flow:

$q_D = \dfrac{1}{2\pi} - Q_{DA}$
The expression of the dimensionless variables varies depending on the fluid type, and a specific treatment must be applied in each case.
Oil
For an oil case, the expression of the dimensionless parameters is defined below:
$q_D = \dfrac{141.2\,q\,B\,\mu}{k\,h\,(p_i - p_w)}$   and   $Q_{DA} = \dfrac{0.8936\,Q\,B}{\phi\,h\,A\,c_t\,(p_i - p_w)}$

The dimensionless cumulative production can be expressed in terms of the fluid in place N, in STB:

$N = \dfrac{\phi\,h\,A}{5.615\,B}$

$Q_{DA} = \dfrac{0.8936\,Q}{5.615\,N\,c_t\,(p_i - p_w)} = \dfrac{Q}{2\pi\,N\,c_t\,(p_i - p_w)}$
$\dfrac{141.2\,q\,B\,\mu}{k\,h\,(p_i - p_w)} = \dfrac{1}{2\pi} - \dfrac{0.8936\,Q}{5.615\,N\,c_t\,(p_i - p_w)}$

Using the full definition of the dimensionless variables requires an a priori estimate of PV, basically what we are after. Therefore the method presented by Agarwal-Gardner is iterative. However, we see from the above equation that if we plot $\dfrac{q}{p_i - p_w}$ versus $\dfrac{Q}{c_t\,(p_i - p_w)}$, boundary dominated flow will exhibit a straight line whose intercept with the X axis gives directly N.
Note: In the case of constant flowing pressure, it is interesting to draw a parallel between this rate cumulative plot and the rate cumulative plot used in traditional decline curve analysis. The traditional decline methods yield a maximum recovery rather than fluid in place. The relation between the two methods is established by considering a recovery factor of $RF = c_t\,(p_i - p_w)$.
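A minimal sketch of this rate-cumulative construction for the oil case (hypothetical arrays restricted to boundary dominated flow, consistent units):

    import numpy as np

    def agarwal_gardner_N(q, q_cum, pwf, pi, ct):
        """Fluid in place N from the X-intercept of q/(pi-pwf) versus Q/(ct (pi-pwf)) (sketch)."""
        q, q_cum, pwf = (np.asarray(a, float) for a in (q, q_cum, pwf))
        y = q / (pi - pwf)
        x = q_cum / (ct * (pi - pwf))
        slope, intercept = np.polyfit(x, y, 1)      # straight line during boundary dominated flow
        return -intercept / slope                   # x where y = 0, i.e. N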
$\dfrac{m_n(p_i) - m_n(p)}{q(t)} = \dfrac{1}{c_{ti}\,G_i}\,t_a + b \qquad (1)$

where the normalized pseudo pressure and equivalent time functions are defined as:

$m_n(p) = \dfrac{\mu_i\,z_i}{2\,p_i}\displaystyle\int_{p_{base}}^{p}\dfrac{2\,p}{\mu\,z}\,dp$

$t_a(t) = \dfrac{\mu_{gi}\,c_{ti}}{q_g(t)}\displaystyle\int_0^t\dfrac{q_g(\tau)}{\mu_g(\bar{p})\,c_t(\bar{p})}\,d\tau$
$t_a = \dfrac{\mu_i\,c_{ti}\,Z_i\,G_i}{2\,p_i\,q(t)}\left[m(p_i) - m(\bar{p})\right]$

$m_n(\bar{p}) - m_n(p_w) = b\,q(t) \qquad (2)$

Create a plot of $\dfrac{m_n(p_i) - m_n(p)}{q(t)}$ versus $t_a$.
As the system goes into pseudo steady state flow, the points will converge towards a straight
line: the intercept at ta = 0 hrs is b.
Having b:
2. $\bar{p}/Z$ is plotted versus Q;
3. A straight line is drawn through the $\bar{p}/Z$ points and extrapolated to get Gi.
The only problem is that the time function used by the first plot, ta, involves the reservoir average pressure and hence requires an estimate of the reserves.
(2) By selecting a time range where the system is believed to be in pseudo steady state, the software performs an automatic regression to determine b using equation (1) and the method defined above.
(3) Then the straight line method is applied on the plot of $\bar{p}/Z$ versus Q; Pi/Zi can be input to find Gi (STGIIP).
As with most methods that extrapolate a behavior, the constraint is that the well status and production conditions must be constant in the interval used for the analysis.
Kabir et al. presented a cartesian plot of P vs q which provides a simple way to perform a
diagnosis.
$\dfrac{dp_{wf}}{dq} = \dfrac{0.2339\,B\,q}{\phi\,h\,c_t\,A\,\dfrac{dq}{dt}}$
In a closed system the rate has an exponential decline; therefore the slope dp/dq will be constant and a function of the drained volume.
This allows the various behaviour types to be diagnosed on a p vs q plot of any well field data:
Then the adequate subset of points can be selected for use in the corresponding methods.
It also allows the behaviours to be compared from well to well in order to detect possible compartmentalization: if the plots exhibit two different slopes in PSS, this tends to indicate that the wells are depleting two different compartments.
The original methods specific to gas (and only to gas) production analysis are directly
presented in the chapters ‘Old Stuff’ and ‘Right Stuff’.
Gas pseudopressure:   $m(p) = 2\displaystyle\int_0^p\dfrac{p}{\mu\,Z}\,dp$
Using the gas pseudo pressure instead of the pressure, the diffusion equation remains valid and the same methodology presented above can apply.
In addition to these methods, the gas case presents a few particularities, presented here.
$\dfrac{\partial m(p)}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\nabla^2 m(p)$
When the pressure varies significantly, the term $\mu c_t$, and therefore the diffusion term, varies.
If we introduce the pseudotime:

$t_n(t) = \displaystyle\int_0^t I\!\left(p_{wf}(\tau)\right)d\tau \qquad\text{where}\qquad I(p) = \dfrac{\left(\mu\,c_t\right)_i}{\mu(p)\,c_t(p)}$

The diffusion equation becomes:   $\dfrac{\partial m(p)}{\partial t_n} = 0.0002637\,\dfrac{k}{\phi\,\left(\mu\,c_t\right)_i}\,\nabla^2 m(p)$
Note: the use of the normalised pseudotime requires the knowledge of the average pressure at each step, which makes its use difficult.
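Both functions reduce to numerical integrations of the gas PVT; below is a minimal sketch in which mu_of_p, z_of_p and ct_of_p are hypothetical, vectorized user-supplied PVT routines and units are assumed consistent.

    import numpy as np

    def pseudo_pressure(p, mu_of_p, z_of_p, n=200):
        """m(p) = 2 * integral of p'/(mu Z) dp' from ~0 to p, by the trapezoidal rule (sketch)."""
        pr = np.linspace(1e-3, p, n)
        return np.trapz(2.0 * pr / (mu_of_p(pr) * z_of_p(pr)), pr)

    def normalized_pseudo_time(t, p_hist, mu_of_p, ct_of_p, mu_ct_i):
        """Pseudotime: cumulative integral of (mu ct)_i / (mu(p) ct(p)) dt along a pressure history (sketch)."""
        t, p_hist = np.asarray(t, float), np.asarray(p_hist, float)
        f = mu_ct_i / (mu_of_p(p_hist) * ct_of_p(p_hist))
        return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))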
They demonstrated that the real gas constant pressure solution, when plotted using
normalized time, exhibits a boundary-dominated period that matches the exponential decline.
The major drawback however, is that the normalized time expression requires an a priori
knowledge of the average pressure, hence of the reserves. The method is thus iterative in
nature.
Note: if the normalized time is not used a gas response in depletion mode will not necessarily
follow the exponential decline.
It is important to remember that the Fetkovich type-curve is based, for the depletion part, on
the Arps decline curves. Like the decline curves it suffers some limitations:
- It is assumed that the bottom-hole pressure is fairly constant. Fetkovich suggests that if the pressure is smooth and uniformly decreasing, one could use a $\Delta p$ normalized rate.
- The well behavior is assumed constant, e.g. no change in skin with time.
- The drainage area of the considered well is constant, i.e. the producing behavior of the surrounding wells must also be stabilized.
The cornerstone of the Blasingame plot is the linearity between the normalized rate $\dfrac{q(t)}{p_i - p_w(t)}$ and $\dfrac{1}{t_e}$ during boundary dominated flow. This relation, valid for a slightly compressible fluid, does not apply to gas unless the rate is normalized with respect to $\Delta m(p)$ and the time is modified as follows:

$t_{e,gas}(t) = \dfrac{\mu_{gi}\,c_{ti}}{q_g(t)}\displaystyle\int_0^t\dfrac{q_g(\tau)}{\mu_g(\bar{p})\,c_t(\bar{p})}\,d\tau$
In the PSS oil flow rate equation $\dfrac{q_o}{\Delta p} = \dfrac{1}{b_{o,pss} + m_{o,pss}\,\bar{t}}$, the slope $m_{o,pss}$ is a function of the fluid compressibility, which is highly pressure dependent in gas cases.

The objective is to keep the linearity and the PSS flow rate equation for gas in the same shape as for oil:

$\dfrac{q_g}{\Delta p_p} = \dfrac{1}{b_{g,pss} + m_{g,pss}\,t_{e,gas}}$   with a constant slope $m_{g,pss}$

We take the varying viscosity and compressibility into account by introducing the ‘pseudo’ pressure and ‘pseudo’ normalized time:

$p_p = \dfrac{\mu_{gi}\,z_i}{p_i}\displaystyle\int_{p_{base}}^{p}\dfrac{p}{\mu_g\,z}\,dp$

$t_{e,gas}(t) = \dfrac{\mu_{gi}\,c_{ti}}{q_g(t)}\displaystyle\int_0^t\dfrac{q_g(\tau)}{\mu_g(\bar{p})\,c_t(\bar{p})}\,d\tau$

The slope is then:   $m_{g,pss} = \dfrac{1}{G\,c_{ti}}$

The intercept becomes:

$b_{g,pss} = 141.2\,\dfrac{\mu_{gi}\,B_{gi}}{k\,h}\left[\dfrac{1}{2}\ln\left(\dfrac{4}{e^{\gamma}}\,\dfrac{A}{C_A\,r_w^2}\right) + s\right]$
The consequence of not using this time function is that the linearities expected during the
various flow regimes may be distorted.
It is not necessary in the Blasingame plot, since this plot only provides a basis for comparing the data and a model response calculated for the particular case. Handling the nonlinearities is the model's responsibility, not that of the plot. If the model is representative, the model and the data will be consistent. So it does not matter whether a given period exhibits a linearity or not.
This is different when using a type curve as the type curve embeds modelling assumptions.
That is why this time function is used in Topaze in the match with the Blasingame type curve.
$q_D = \dfrac{1}{2\pi} - Q_{DA}$

provided that the dimensionless rate and cumulative production are defined as:

$q_D = \dfrac{1422\,T\,q}{k\,h\,\left[m(p_i) - m(p_w)\right]}$   and   $Q_{DA} = \dfrac{4.50\,T\,z_i\,G_i\,\left[m(p_i) - m(\bar{p})\right]}{\phi\,h\,A\,p_i\,\left[m(p_i) - m(p_w)\right]}$
Unlike the oil case, we cannot find a simple expression of a normalized cumulative that is
independent of the fluid in place. This is because the gas in place is involved in a non-linear
fashion in the expression of the dimensionless cumulative. However by extension with the
previous method for oil we can choose to plot:
$\dfrac{q}{m(p_i) - m(p_w)}$   versus   $\dfrac{G_i\left[m(p_i) - m(\bar{p})\right]}{m(p_i) - m(p_w)}$
At the intercept (q = 0):

$\dfrac{G_i\left[m(p_i) - m(\bar{p})\right]}{m(p_i) - m(p_w)} = \dfrac{\phi\,h\,A\,p_i}{4.50\cdot 2\pi\,T\,z_i} = \dfrac{PV\,p_i}{4.50\cdot 2\pi\,T\,z_i} = \dfrac{PV\,p_{sc}}{4.50\cdot 2\pi\,T_{sc}\,B_{gi}}$

i.e. a quantity directly proportional to $\dfrac{PV}{B_{gi}}$, that is to $G_i$.
4.D.3.d Flowing gas material balance plot and P-Q diagnostic plot
Like the p/Z vs Q plot, these methods, which are specific to gas and not ‘adapted’ oil analysis methods, are presented in the chapter ‘The right stuff’.
It is necessary first to define the vertical pressure profile in the well. The Saphir/Topaze
internal flow correlations or an actual lift curve generated by a specialized program
(Amethyste) can be used for this.
The available correlation for gas, in Topaze, is an external lift curve or the internal Cullender &
Smith method, but with two modifications for handling water and condensate.
The correlation is based on the gas properties as defined in the PVT setup, and a general
friction factor is calculated using the Colebrook and White equation. Note that when you deal
with a condensate case with equivalent gas gravity and total rates, the proper gradient and
rates are used in the correlation to account for the condensate presence. The presence of
water can be accounted for, based on a constant water to gas production ratio.
The solution selected in Topaze is to include both the hydrostatic and friction pressure loss in
the model and correct the measured pressure to the sandface depth.
An important consequence is that the rate dependent skin attributed to the formation can be a lot smaller, because a large part of the pressure loss is now attributed to friction in the wellbore.
The diffusion equation can be considered linear for as long as the diffusion terms left outside
the time and pressure variables remain constant. Diffusion equation:
$\dfrac{\partial m(p)}{\partial t} = 0.0002637\,\dfrac{k}{\phi\,\mu\,c_t}\,\nabla^2 m(p)$
As soon as we have a pressure gradient in the reservoir, the diffusion term, and especially the product $\mu c_t$, will become different from one reservoir block to the next.
If we look at a real gas simulation for a long term production survey and use it to match with
an analytical model where the diffusion was taken at initial pressure, we can see a divergence
between the simulated pressure and the measured data, even though the reservoir geometries
and the PVT used are strictly the same.
The model includes a reservoir size and an initial pressure. So the initial gas in place can be
calculated as an integral part of the model. At any time step the algorithm calculates the
average pressure from the cumulative production using p/Z, and replaces the viscosity and
total compressibility in the superposition by those corresponding to the average pressure. So at any time step the simulated pressure is coherent with the material balance of the model. The corresponding derivation is as follows:
We consider the total gas in place at initial pressure. $V_{res}$ is the pore volume occupied by the gas, $T_{res}$ the fluid temperature at reservoir conditions, and $G_i$ the initial gas in place at standard conditions.

So we get immediately Gi:   $G_i = \dfrac{p_i\,V_{res}\,T_{sc}}{Z_i\,p_{sc}\,T_{res}}$

We now consider, at time t, the same situation after a total cumulative production of Q(t). We now want to calculate the average reservoir pressure:

Same amount of fluid at standard conditions:   $p_{sc}\left[G_i - Q(t)\right] = n(t)\,R\,T_{sc}$

So we get immediately:   $G_i - Q(t) = \dfrac{\bar{p}\,V_{res}\,T_{sc}}{\bar{Z}\,p_{sc}\,T_{res}}$
The use of a numerical model is, conceptually, even simpler. As the gas equation is entered at
the level of each cell, the material balance is automatically honoured, not only globally, as
above, but at the level of each cell. Solving the problem numerically is by far the most rigorous
approach.
As the problem is nonlinear, this requires Topaze NL and the use of a nonlinear solver.
A rate-dependent skin model may be used in the pressure and rate history simulation, defined by S0 and ds/dq. These values must come from a well test data analysis; the methods are developed in the chapter ‘PTA - General Methodology’.
In a numerical model the (nonlinear) non-Darcy flow effect is included in the flow equation through the value of the (nonlinear) non-Darcy flow coefficient $\beta$, which appears in the Forchheimer equation:

$-\dfrac{\partial P}{\partial x} = \dfrac{\mu}{k}\,u + \beta\,\rho\,u^2$

It can be evaluated from the ‘rate-dependent skin’ linear assumption described above using ds/dq with:

$\beta = \dfrac{ds}{dq}\,\dfrac{2\pi\,r_w\,h\,\mu}{\rho\,k}$

$\beta = \dfrac{0.005}{\left[\phi\,(1 - S_w)\right]^{5.5}\,k^{0.5}}$
Modern Production Analysis is based on the use of PC-based PA software products. The key for any modern software is to combine user friendliness with a powerful technical kernel, requiring both analytical and numerical capabilities. In terms of methodology, the central diagnostic
tools are the Blasingame and loglog plots, which are used whenever such a diagnostic is
possible. However, because of the very scattered nature of production data, the ultimate
diagnostic tool will often be the history plot, where the coherence of the model and the data, in
terms of simulated pressures, rates and cumulative productions, will be the final decision tool
for the interpretation engineer.
Once the interpretation is initialized and production data loaded, the first task will be to extract
the interval of time on which the analysis will be performed. If the pressures are not available,
only the ‘old’ tools can be used. If both rates and pressures are available, the interpretation
will be performed with the four main diagnostic tools. The interpretation engineer can select
one or several analytical and/or numerical models, set their parameters and generate these
models for comparison with the actual data. For models that are believed applicable, the
engineers can refine the model leading parameters, either manually or by using nonlinear
regression.
Once this is finalized, the engineer may use the model(s) to forecast future production by
defining a scenario of producing pressures. The user can then run a sensitivity analysis on a
selection of the model parameters.
The path described is the default path when all is well. In reality, for complex problems, it
becomes a trial-and-error process where the interpretation engineer may decide to go back
and forth as needed when part of the process is unsatisfactory.
Type of data          Required for PA
Pressure history      Old stuff: No – Right stuff: Yes
Porosity              Yes
The load option imports flat ASCII files, allows manual input, copy-paste from spreadsheets,
and increasingly input through links to databases or intermediate repositories using advanced
filtering tools.
After the load, the cumulative production is automatically calculated by integration of the production history and is displayed on the history plot together with the rate. Pressures are loaded and displayed in the history plot.
Quality control is not as critical as in PTA, because wellbore effects are normally not dominant,
except when the pressure is recorded at surface. In this case, the validity of the well intake
curves used to correct the pressure to sandface during extraction can become a potential weak
point.
Beyond the usual cleaning of irrelevant data and the correction of load errors, the main
challenge is to end up with at least one coherent, synchronized set of rate and pressure data.
To get there the engineer may have to perform the following tasks:
If rates are not loaded from a file, create the rate history by identifying the pressure
breaks and get the rate values from hard copy reports.
Refine the production history, when the time sampling of rate measurements is too crude.
Conversely, if the production history goes into useless detail, simplify the rate history to
reduce the CPU time required to run the models.
ARPS plot
Fetkovich type-curve plot
Fetkovich plot
Blasingame plot
Loglog plot
Normalized rate-cumulative plot
At extraction time the option to invoke a defined lift curve or flow correlation to correct the
pressure profile from the measurement depth to sandface can be chosen.
The loglog plot (see below) is used for diagnostic purposes to identify the two main flow
regimes hopefully present in production data, infinite acting radial flow (IARF) and pseudo
steady state (PSS). The pressure match is fixed to coincide with a stabilization of the
derivative of the normalized pressure integral and the time match is fixed to the unit slope line
of PSS at late time. The pressure match and the time match are adjusted by click and drag of
the mouse. The loglog plot is linked to the Blasingame and the Fetkovich plot so any change in
the loglog match is mirrored in the others. In case the data is of high quality and the sampling frequency is high enough, it is sometimes possible that more transients than IARF develop, thus extending the diagnostic possibilities to approach those of PTA, and both well and reservoir models can be recognized in the data. This is however rare with the low frequency data typically used in production analysis.
If the loaded pressure history contains any decent build-ups with high frequency pressure data
or a link to a database that allows the repopulation of this data without a filter then the
interpreter is in luck. This data can be transferred to a PTA module to determine all of the
classical parameters, including the model; these can be transferred back to the PA package and finally the modelling can begin, adjusting for parameters that can typically change over the longer time intervals involved in production analysis (e.g. skin).
Fig. 4.E.1 – Match on the Loglog plot Fig. 4.E.2 – Blasingame plot
The principle of non linear regression is to use numerical optimization to refine the parameter
estimates by minimizing an error function, generally the standard deviation between the
simulated and real values at selected times. The most commonly used optimization algorithm
is Levenberg-Marquardt, but there are many variants. The engineer has the possibility to run the regression on some or all of the leading parameters of the model, and can also fix the upper and lower limits of the allowed parameter variation. The data points on which the error function will be calculated may also be controlled. See the figure below:
4.E.6 Forecast
Once the model has been selected, a production forecast can easily be performed by defining a flowing (producing) pressure scenario. The figure below illustrates this with a constant pressure production for 100 days.
Typically, professional analysis reports are generated with two possible set-ups:
A header document, from a word processor, with some ‘copy-paste’ of plots and results
from the PA software, but with most of the ‘mechanical’ report delivered as an annex,
An integrated document, typically from a word processor, where some plots and tables are
dynamically connected to the PA software using some OLE or COM automations. The
advantage of this solution is that it is much more flexible. Once a model template has been
defined, the reporting process will get shorter and shorter from one analysis to the next.
Time range:  PTA – Hours, Days, sometimes Weeks;  PA – Weeks, Months, Years
Reservoir areas of interest:  PTA – Whatever volume of investigation during the test and/or the shut-in;  PA – Well or group drainage area
Modern PA and PTA share a similar path. After loading, synchronizing and extracting data, one
first tries to run a diagnostic using specialized plots and straight lines. An analytical or
numerical model is then run, and an optimization process adjusts the parameters to minimize
the difference between the simulated model response and the observed data.
The main regime of interest in PA is Pseudo Steady State (PSS). We look primarily for a unit
slope on the Loglog or the Blasingame plot. Specialized analysis will determine the size of the
well drainage area from the slope, and the intercept will be a function of three main factors:
the well productivity index, the mobility and a shape factor. More complex models could be
used, but there may not be enough information to determine the additional parameters.
However the pressure transient results may be used to determine these.
Production history may be so scattered that the responses will be dominated by transients. In
this case there is no way to identify pseudo steady state behavior. This may happen even
though the well is still producing and the pressure is declining globally.
Despite the lack of pure PSS behavior it will be possible with a model to history match the data
and obtain a reliable drainage area estimate and even sometimes discriminate mobility, skin
and shape factor. No specialized plot will show such a behavior. So the use of models and
optimization is likely to change the way PA is performed completely, even more radically than
happened with PTA.
PTA can provide a clean snapshot of what the well and reservoir system is at a given time. PA
covers a much wider time range, and some of the assumptions valid during a single well test
will not be true over the well producing history. The three main differences are related to the
well productivity, the drainage area and multiphase production.
PTA models account for rate-dependent skin. It is also known that the well may be cleaning up
during the initial production phase. So the well productivity may not be constant during a well
test. However this is a reasonable assumption for a single build-up, and optimization will be
possible with a single mechanical skin model. In PA this is unlikely. Well productivity does
change over time, and no optimization process is reasonably possible over a long period of
time without considering a time-dependent skin.
In PTA, boundary effects are generally material boundaries, even though interfering wells can
produce the same effects as boundaries. In PA we consider the well drainage area. Except
when there is only one producing well, part or all of the drainage area boundaries are
immaterial, depending on the flow equilibrium between the neighboring wells. The drainage
area will change in time when new wells are produced, or even when the flow rates change
asymmetrically. To account for these changes, a multi well model, either analytical or
numerical, may be required.
5 – Wellbore models
OH – OSF – DV
5.A Introduction
Until we are able to beam the fluid directly from the pore space into the ship cargo bay we will
need to use this route called the wellbore. Wellbore effects are seen very differently,
depending where you stand:
For the Pressure Transient Analysts anything related to the wellbore is a nuisance. Wellbore
effects will spoil the early part of the pressure response, and may even persist throughout
the whole test or shut-in survey.
Production Analysts are a little luckier, because they work on a time scale where transient
effects are not that important, and addressing wellbore effects amounts to connecting a lift
curve. In fact, playing with the lift curves and implementing ‘what if’ scenarios is part of
their jobs.
The steady-state component of wellbore effects is a key element of the well productivity. It
may be modeled using lift curves, or VLP curves, and this in turn requires flow correlations
that are present in both Production Logging and Well Performance Analysis, a.k.a. Nodal
Analysis™ (Trademark of Schlumberger).
Correction to datum may be either applied to the data in order to correct the real pressure
to sandface, or integrated in the model in order to simulate the pressure at gauge level.
Correction to datum and integration of VLP curves are detailed in the PTA (QA/QC) and the
Well Performance Analysis chapters of this book.
The transient component of wellbore effects often ruins the life of the PT-Analyst. The
action at the origin of a sequence of flow (opening and shut-in of a valve, change of a
choke) is occurring at a certain distance from the sandface, and any wellbore volume
between the operating point and the sandface acts as a cushion. This induces a delay
between what we want to see and what effectively occurs at the sandface.
This chapter deals with the modeling of some of the simplest transient wellbore models,
and is mainly applicable to Pressure Transient Analysis only.
As introduced in the ‘Theory’ chapter, the wellbore storage introduces a time delay between
the rate we impose at the operating point (typically the choke manifold at surface) and the
sandface rate. The wellbore storage equation was introduced in the ‘Theory’ chapter:
Wellbore storage equation:   $q_{sf} = q\,B + 24\,C\,\dfrac{\partial p_{wf}}{\partial t}$
Not surprisingly, the constant wellbore storage model assumes that the wellbore storage factor C is constant. The figure below illustrates the behavior of the sandface rate during the opening and shut-in of a well.
At a point in time, and in the absence of any other interfering behaviors, the derivative will leave the unit slope and transit into a hump which will stabilize into the horizontal line corresponding to Infinite Acting Radial Flow. The form and the width of the hump are governed by the parameter group $C\,e^{2S}$, where S is the skin factor.
The horizontal position of the curve is only controlled by the wellbore storage coefficient C.
Taking a larger C will move the unit slope to the right, hence increase the time at which
wellbore storage will fade. More exactly, multiplying C by 10 will translate the curve to one log
cycle to the right.
The early time straight line corresponding to the pure wellbore storage is given by:

Wellbore storage straight line:   $\Delta p(\Delta t) = \dfrac{q\,B}{24\,C}\,\Delta t = m\,\Delta t$

So one can get the wellbore storage constant with:

Specialized plot result:   $C = \dfrac{q\,B}{24\,m}$
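As a minimal sketch of this specialized-plot calculation, in field units (q in STB/D, B in rb/STB, Δt in hours, Δp in psi, C in bbl/psi); the input arrays are hypothetical and restricted to the pure wellbore storage period.

    import numpy as np

    def wellbore_storage_from_unit_slope(dt_hr, dp_psi, q_stb_d, B):
        """Wellbore storage coefficient from the early-time straight line dp = m * dt (sketch)."""
        dt, dp = np.asarray(dt_hr, float), np.asarray(dp_psi, float)
        m = np.sum(dt * dp) / np.sum(dt * dt)     # least-squares slope of a line through the origin
        return q_stb_d * B / (24.0 * m)           # C = qB / (24 m)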
The value of C has a major effect, which is actually exaggerated by the logarithmic time scale. You can see on the linear history plot, however, that all responses seem to be the same.
When the influence of wellbore storage is over, both the pressure change and the derivative merge together. Wellbore storage tends to mask infinite acting radial flow over a time that is proportional to the value of C. Wellbore storage will also tend to mask other flow regimes that can be present in a well test. Early time well responses such as linear, bi-linear, spherical and hemispherical flow will disappear if the storage effect is considerable. Effects of heterogeneous reservoirs can also be masked by wellbore storage. The wellbore storage effect on other well and reservoir models is covered in the individual chapters dealing with these models.
Wellbore storage does not affect the late time pseudo steady state response.
A classic example is gas. When the well is flowing the pressure in the wellbore will decrease,
and the gas compressibility will increase. In this fixed volume this will result in an increase of
the wellbore storage parameter. The opposite will occur during the shut-in, where the increase
of pressure will result in a decrease of the wellbore storage. Though it occurs in any gas test,
this behavior will be visible, and become a nuisance, in the case of tight gas, where the high
pressure gradient in the formation results in a high pressure drop in the wellbore.
Another typical example is an oil well flowing above bubble point pressure in the reservoir. At a
stage (sometimes immediately) there will be a point in the wellbore above which the pressure
gets below bubble point. In this place the oil compressibility will be progressively dominated by
the compressibility of the produced gas, hence an increase of the wellbore storage which will
evolve in time.
In both cases, the wellbore storage will be increasing during the production and decreasing
during the shut-in.
Other sources of changing wellbore storage may be various PVT behaviors, a change of completion diameter combined with a rising or falling liquid level, phase redistribution, a falling liquid level during a fall-off, etc.
In some cases the wellbore effect will be so extreme that any modeling is hopeless. In this
case the engineer will focus on matching the derivative response after the wellbore effect has
faded, accepting that the early time response cannot be matched and may induce a
(cumulative) incorrect value of the skin factor.
There are three main ways today to model changing wellbore storage:
The figures below illustrate increasing and decreasing wellbore storage as modeled by the
Hegeman model of changing wellbore storage.
The matching consists in setting the wellbore storage straight line on the FINAL value of wellbore storage, picking a second position corresponding to the INITIAL value of storage, and then picking the median time when the transition occurs. The initial model generation will seldom match the response perfectly, but this model, combined with a robust nonlinear regression, has the capacity to adjust to virtually any single-trend early time response.
In practice, the Hegeman model is sharper and has more capabilities to match real data. This
is related to the choice of transition function and does not mean that this model is physically
better. Actually it does not mean that ANY of these models are correct, and they should be
used with care for the following reasons:
The models are just transfer functions that happen to be good at matching real data. There
is no physics behind them. They may end up with an initial, final storage and transition
time that makes no physical sense.
These models are time related. There will be a wellbore storage at early time and a
wellbore storage at late time. This is not correct when the model is pressure related. In the
case of production, the real wellbore storage at early time will correspond to the storage at
late time of the build-up, and the reverse. So, the superposition of a time related solution
will be incorrect on all flow periods except the one on which the model was matched. This
aspect is often ignored and/or overlooked.
These models are ‘dangerous’ to the extent that they work beautifully to match ‘anything that goes wrong’ at early time, even when the use of such a model is not justified. They are the early time version of the radial composite model at intermediate time. Actually, combining changing wellbore storage and radial composite will match any rubbish data.
Pseudo-time:   $t_{ps}(t) = \displaystyle\int_0^t I\!\left(p_{wf}(\tau)\right)d\tau \qquad\text{where}\qquad I(p) = \dfrac{1}{\mu(p)\,c_t(p)}$
The following figures show a loglog response before and after pseudo time correction. The use
of pseudo time is detailed in the chapter on ‘Gas’.
This method modifies, once and for all, the data to match the model, and not the opposite.
This excludes, for example, the possibility of comparing several PVT models on the same
data. The method was the only one available at the time of type-curve matching, where
models were limited to a set of fixed drawdown type-curves.
In order to calculate the pseudotime function one needs the complete pressure history.
When there are holes in the data, or if the pressure is only acquired during the shut-in, it
will not be possible to calculate the pseudotime from the acquired pressure. There is a
workaround to this: use the pressures simulated by the model, and not the real pressures.
This amounts to the same thing once the model has matched the data, and there is no
hole. However it is a bit more complicated for the calculation, as the pressure at a
considered time requires the pseudotime function, and vice versa.
This is by far the most relevant way to simulate pressure related wellbore storage. The figure
below illustrates a buildup matched with the changing wellbore storage model (Hegeman), the
extracted buildup corrected for pseudo time and matched with this model, and the match with
the non linear numerical model with pressure dependent wellbore storage.
6 – Well models
OH – OSF – DV
6.A Introduction
The geometry of the well, its trajectory in the formation, the way it is completed and/or
stimulated, have a major impact on transient responses and the long term well productivity.
For the PT-Analyst, in most cases the well model will dominate the transient response after the
wellbore storage has faded and before IARF is established.
The flow geometry around the well may create characteristic flow regimes, with a specific
signature of the Bourdet derivative and linearity on a given specialized plot. Some long lasting
flow geometries may also be identified by the Production Analyst.
After radial flow is reached the well model will produce a total equivalent skin, which can be
calculated using the standard straight line method (MDH, Horner, Multirate semilog). This total
skin is the sum of the ‘real’ skin, resulting from the well damage or stimulation, and a
geometrical skin which will define the difference in productivity between the current well
geometry and a fully penetrating vertical well.
However textbook responses are not that frequent. Wellbore effects may hide some specific
well regime. Some well configurations, such as horizontal wells, will have a longer term
impact, will be sensitive to heterogeneities and will seldom exhibit the expected behavior.
The problem of identifying and quantifying well behaviors is not trivial, and it is not expected
to become any better in the years to come. Additional developments are very likely to blur the
message even more:
- In extremely low permeability formations the geometry of the well may affect the pressure
response for years, and IARF may never be reached for the practical duration of the well
life. This is the case of fractured horizontal wells in shale gas formations, and it is
developed in the unconventional gas chapter of this book.
- Increasingly complex and ‘intelligent’ completions, multi-drain wells, may be a dream for
the production engineer but they are a nightmare for the analyst. The sensitivity of the
solution to local heterogeneities, the lack of unique solutions, and the absence of any pure
flow regime make the analysis almost impossible.
For these reasons, what is taught in this chapter may become less and less relevant when this
sort of configuration becomes the norm. When no formal analysis technique is available we will
end up using these complex models for test design and productivity design, and we will use
history matching with a very limited number of diagnostic tools.
This is the model used to derive the basic equations in the chapter on ‘Theory’. This model is sometimes called ‘wellbore storage & skin’, in reference to the original type-curves of the 1970s. The reason is that the only two parameters affecting the loglog plot response are the wellbore storage and the skin factor. However, wellbore storage is a wellbore effect, and skin is used as a complement to all other models.
So we will stick to the term ‘vertical well’, considering that ‘fully penetrating’ is the default
status of a vertical well, otherwise the model will be called ‘limited entry’ or ‘partial
penetration’ (see other sections in this chapter).
The behavior of a vertical well in a homogeneous infinite reservoir has already been presented
in previous sections and is shown here as a reminder. The loglog and semilog plots below show
the response for various values of the skin factor (S).
On the loglog plot, the shape of the derivative response, and with a much lower sensitivity the shape of the pressure response, will be a function of the group $C\,e^{2S}$.
The shape of the hump, which originally was set to $C_D\,e^{2S}$ when dealing with type-curves, is actually a function of C and $r_w\,e^{-S}$.
$p_i - p(t) = \dfrac{162.6\,q\,B\,\mu}{k\,h}\left[\log t + \log\dfrac{k}{\phi\,\mu\,c_t\,r_w^2} - 3.228 + 0.8686\,S\right]$

$\Delta p(t) = \dfrac{162.6\,q\,B\,\mu}{k\,h}\left[\log t + \log\dfrac{k\,h}{\mu} - \log\left(\phi\,h\,c_t\right) - 3.228 - 2\,\log\left(r_w\,e^{-S}\right)\right]$

One can see that the slope is a function of $k\,h/\mu$, and that there is a constant term that shows, among other things, the residual effect of $r_w\,e^{-S}$, $\phi$ and $c_t$.
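The standard specialized-plot calculations that follow from this equation can be sketched as below (field units; m_slope is the semilog slope in psi/cycle and dp_1hr the straight-line value of Δp at t = 1 hr, both hypothetical inputs):

    import numpy as np

    def semilog_results(m_slope, dp_1hr, q, B, mu, h, phi, ct, rw):
        """Permeability-thickness from the semilog slope and skin from the 1-hour intercept (sketch)."""
        kh = 162.6 * q * B * mu / m_slope                         # slope m = 162.6 q B mu / kh
        k = kh / h
        skin = 1.151 * (dp_1hr / m_slope
                        - np.log10(k / (phi * mu * ct * rw**2))
                        + 3.228)                                  # rearranged from the IARF equation
        return kh, skin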
$-\dfrac{dP}{dx} = \dfrac{\mu}{k}\,u + \beta\,\rho\,u^2\,, \qquad \beta = \dfrac{0.005}{\left[\phi\,(1 - S_w)\right]^{5.5}\,k^{0.5}}$
The exact problem is nonlinear and can be solved numerically, as described in chapter on
‘Numerical models’.
However, it can be simplified and analytical models can be used with a fair approximation by
adding a rate dependent component to the skin factor. Under these conditions, the effective
skin factor S’ for a given rate q will be given by:
$S' = S_0 + D\,q \qquad\text{or}\qquad S' = S_0 + \dfrac{dS}{dq}\,q$
D is called the non-Darcy flow coefficient. In order to assess the rate dependency the skin has
to be solved for several rates.
In gas well testing the most common method is to plan for an isochronal or modified
isochronal test, but this is done mainly to determine the deliverability of the well. Such a test
procedure includes multiple build-ups after different flowrates, and the engineer can then profit for ‘free’ from the fact that the build-ups can be analyzed for skin to define the rate dependency. This can then be used in the model.
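A minimal sketch of the resulting 'skin versus rate' straight line: the skins obtained from the individual build-ups are regressed linearly against rate, giving the mechanical skin S0 (intercept) and the rate dependency D (slope); the data arrays are hypothetical.

    import numpy as np

    # Hypothetical skins from semilog analysis of successive build-ups and the corresponding rates
    rates = np.array([2.0, 4.0, 6.0, 8.0])       # e.g. MMscf/D
    skins = np.array([3.1, 4.2, 5.0, 6.1])

    D, S0 = np.polyfit(rates, skins, 1)           # S' = S0 + D * q : slope = D, intercept = S0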
The following figures illustrate the loglog plot with four buildups with different skins and the
corresponding history match using a constant skin in the model. It can be seen that the match
is not consistent with the measured data.
Fig. 6.C.1 – Loglog plot Fig. 6.C.2 – History match, constant skin
The next figures illustrate the ‘skin versus rate’ plot, which is used to evaluate the rate dependency and the skin that would be observed if no turbulence were present. From this we can determine a rate-dependent skin model. Finally we see in the history match plot that the model is consistent with the measured data.
Fig. 6.C.3 – Skin vs. rate plot Fig. 6.C.4 – History match, rate dependant skin
The most common example of a well model transformation is when a well is subject to fracture stimulation. Typically the well is damaged and no fracture intersects the well during the first status, or time period, of the well test. The well is then subject to the ‘frac’ job and the well model is changed to a fracture model. It is possible to model this using a time dependent well model. The history is divided into time periods with a certain status, and each status can have the following definitions:
The figure below illustrates the loglog match of the well behavior before and after a fracture
stimulation job. The model matches the whole history of the test sequence including the
fracture treatment.
Fig. 6.C.5 – Changing well loglog match
Fig. 6.C.6 – Changing well history match
These effects may be modeled by allowing the mechanical skin to vary over the well life. This
may actually be combined with a rate dependent component, though solving for both may
often be an under-specified problem.
In Saphir, time dependent skin is just a subset of the changing well model where each status
is assigned a skin. The figure below illustrates a well that is cleaning up matched with a
constant and changing skin model.
In contrast, fracturing requires a mechanical stress induced by the resistance to the flow, and will typically be performed in low permeability formations. An extreme case today is the production of shale gas formations, which only occurs through series of fractures created along a horizontal drain. In a fracture job the bottom hole pressure is raised above the fracture gradient of the formation, but within the pressure range allowed by the completion. Once the fracture is initiated, the bottom hole pressure is maintained while a proppant such as sand or ceramic beads is included in the fracturing fluid. As the fracture ‘screens out’, the induced fracture faces remain open.
Rock mechanics suggests that in most cases the fracture has symmetrical ‘bi-wing’ geometry.
The model we use in well testing assumes that the fracture wings are two perfect rectangles,
each of length Xf, the half fracture length. For fully penetrating fractures the height of the rectangles is the formation thickness.
There are two main types of fractured well models: the high, or ‘infinite’, conductivity and the ‘finite’ conductivity. In the high conductivity case we assume that the pressure drop along the inside of the fracture is negligible. In the finite (low) conductivity case we simulate diffusion within the fracture (see specific section).
There are two main high conductivity fracture models: the Uniform Flux model assumes a
uniform production along the fracture. The Infinite Conductivity model assumes no pressure
drop along the fracture. The latter is the one that makes sense physically. The former was
initially developed because it is pretty straightforward to generate analytically. It is also used
with a ‘trick’ (see below) to simulate the infinite conductivity response without the CPU cost of
the true, semi-analytical solution.
6.D.2 Behavior
At early time only the part of the reservoir in front of the fracture will significantly contribute
to the well production, orthogonal to the fracture plane. This is what we call the linear flow,
and this is a characteristic feature (see figure below).
This linear flow is a particular case of a flow through a section of constant area A. Another example of such flow is the late time linear flow between two parallel faults. When such flow
occurs there is a linear relation between the pressure change and the square root of the
elapsed time, given by the following relation:
$$\Delta p = \frac{8.128\,qB}{Area}\sqrt{\frac{\mu\,t}{k\,\phi\,c_t}}$$

Where ‘Area’ is the flowing section in ft². In the case of a fracture, the flowing section is the area of the fracture rectangle, so Area = 2Xf·h. We then get:

$$\Delta p = \frac{4.064\,qB}{h\,X_f}\sqrt{\frac{\mu\,t}{k\,\phi\,c_t}}$$
The flow will then progressively deviate from the early linear flow while the rest of the
formation starts impacting the well production, and the area of investigation becomes elliptical.
When the production continues the ellipse grows into a large circle and we reach Infinite Acting
Radial Flow. At this stage the fracture behaves like a standard well with a negative skin.
The figure below shows the normalized density of flow from the formation into the fracture.
The red curve corresponds to the Uniform Flux fracture. At very early time the infinite
conductivity solution will follow this profile. However this changes rapidly until it stabilizes at
late time to the blue curve, showing that most of the flow at late time comes from the tips of
the fracture, which are more exposed to the rest of the formation than the center part.
Fig. 6.D.4 – Flow profile along the fracture at early and late time
From the previous section, the pressure change during the early time linear flow is:
$$\Delta p = m\sqrt{t} \qquad\text{where}\qquad m = \sqrt{\frac{16.52\,q^2B^2\mu}{h^2\phi c_t}\cdot\frac{1}{k X_f^2}}$$
In the equations above, all parameters are inputs except the permeability and the fracture half
length. During the linear flow, the result of any analysis will therefore provide a relation
between permeability and fracture half length by determining the value of kXf².
$$\Delta p' = t\,\frac{d\Delta p}{d\ln t} = t\,\frac{d\Delta p}{dt} = t\cdot\frac{m}{2\sqrt{t}} = \frac{1}{2}\,m\sqrt{t} = \frac{\Delta p}{2}$$

On a decimal logarithmic scale this writes:

$$\log(\Delta p) = \log(m) + \frac{1}{2}\log(t) \qquad\text{and}\qquad \log(\Delta p') = \log(\Delta p) - \log(2)$$
The early time flow regime of a high conductivity fracture is characterized on a loglog plot by a
half unit slope on both the pressure and derivative curves. The level of the derivative is half
that of the pressure. At later time there is a transition away from this linear flow towards
Infinite Acting Radial Flow, where derivative stabilizes (see figure below).
The position of these two half slope straight lines will establish a link between the time match
and the pressure match, providing a unique result for kXf². Fixing the stabilization level of the
derivative will determine the value of k, and the half fracture length will be calculated from
kXf². If there is no clear stabilization level the problem will become underspecified.
These solutions differ only slightly when plotted on a loglog scale. Purists consider that the
uniform flux solution is physically incorrect and only the infinite conductivity solutions should
be used. In real life the uniform flux transients generally offer a better match, and this can be
explained by the fact that the productivity of the uniform flux fracture, for a given length, is
slightly lower than the infinite conductivity, and that this, possibly, better simulates the slight
pressure losses in the fracture.
The Uniform Flux model was published because it was fairly easy to calculate. The infinite
conductivity fracture was solved semi-analytically (at high CPU cost) but it was shown that an
equivalent response could be obtained by calculating the (fast) uniform flux solution at an off-
centered point in the fracture (x = 0.732·Xf). This position corresponds to the intercept of both
flow profiles, as shown in the ‘behavior’ section.
From the previous section, the relation between pressure and time during the linear flow is:

$$\Delta p = m\sqrt{t} \qquad\text{where}\qquad m = \sqrt{\frac{16.52\,q^2B^2\mu}{h^2\phi c_t}\cdot\frac{1}{k X_f^2}}$$

$$kX_f^2 = \frac{16.52\,q^2B^2\mu}{m^2 h^2\phi c_t} \qquad\text{and/or}\qquad X_f = \frac{4.06\,qB}{m\,h}\sqrt{\frac{\mu}{k\,\phi\,c_t}}$$
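A numerical illustration of this linear flow analysis is sketched below in Python; the inputs are invented, and the permeability is assumed to have been fixed from the derivative stabilization as discussed above.

```python
import math

# Hypothetical field-unit inputs
q, B, mu = 300.0, 1.1, 0.5            # STB/D, rb/STB, cp
h, phi, ct = 50.0, 0.10, 8.0e-6       # ft, -, 1/psi
m = 30.0                              # psi/hr^0.5, slope of dp vs sqrt(t)
k = 5.0                               # mD, fixed from the derivative stabilization

# kXf^2 is always obtained from the linear flow; Xf requires k
kXf2 = 16.52 * q**2 * B**2 * mu / (m**2 * h**2 * phi * ct)        # mD.ft^2
Xf = 4.06 * q * B / (m * h) * math.sqrt(mu / (k * phi * ct))      # ft

print(f"kXf^2 = {kXf2:.2e} mD.ft^2, Xf = {Xf:.0f} ft for k = {k} mD")
```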
In a specialized plot (except the linear plot) it is possible to replace a time function by its rate
superposition. For example we could use the tandem square root plot in the case of a build-up.
This is the case when we assume that the flow regime we are studying is actually the
superposition of these flow regimes. We do this for Infinite Acting Radial Flow or late time
effects such as the linear flow between parallel faults. This does NOT apply here, or to any
other early time behavior, and we should use the pure time function, not the superposed one.
The only exception may be for shale gas, where the ‘early time’ behavior actually lasts years,
and where superposition may be applicable.
Wellbore storage will affect the early time data by masking the linear flow. If the storage effect is high enough, no fracture flow may be diagnosed from the loglog plot, and the interpreter can no longer justify that a fracture exists unless the total skin is highly negative. It also becomes increasingly difficult to choose between a low or high conductivity fracture, and the half fracture length Xf can no longer be determined from the square root plot. If a fracture job had been done on the well this could be an indication that the job had not been very successful or that the fracture had propagated up or down rather than laterally.
The below figure illustrates the effect of wellbore storage on the linear flow in the loglog plot.
$$\Delta p_{Skin} = 141.2\,\frac{q_{sf}\,B\mu}{kh}\,S_T$$

where ΔpSkin is the pressure difference between our data and the response of a standard, undamaged, fully penetrating vertical well.
When we generate a high conductivity fracture model with no specific damage or stimulation other than the fracture itself, and then perform an IARF straight line analysis, we will get a negative skin. This negative skin is not due to any pressure loss or gain at the sandface; it is just due to the geometry of the well. This is what we call the Geometrical Skin SG. We can
also link this geometrical skin to the equivalent wellbore radius. This is the radius of a
theoretical vertical well that would have the same productivity as the fracture. The geometrical
skin and equivalent radius depend on our choice of fracture model:
Uniform Flux: $\;S_G = -\ln\left(\dfrac{X_f}{2.718\,r_w}\right)$, $\quad r_{weq} = \dfrac{X_f}{e} \approx 37\%\;X_f$

Infinite Conductivity: $\;S_G = -\ln\left(\dfrac{X_f}{2.015\,r_w}\right)$, $\quad r_{weq} = \dfrac{X_f}{2.015} \approx 50\%\;X_f$
Another way to present this is that an infinite conductivity fracture with a half length of 200 ft will have the same productivity as a theoretical vertical well with a radius of 100 ft. If the well radius is 0.3 ft, the geometrical skin SG will be -ln(333) = -5.8.
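The small Python sketch below simply reproduces this calculation with the two relations above (toy numbers only, our own script).

```python
import math

Xf, rw = 200.0, 0.3                          # ft, half fracture length and well radius

# Infinite conductivity fracture
rweq_ic = Xf / 2.015                         # ~50% of Xf
SG_ic = -math.log(Xf / (2.015 * rw))

# Uniform flux fracture
rweq_uf = Xf / math.e                        # ~37% of Xf
SG_uf = -math.log(Xf / (math.e * rw))

print(f"infinite conductivity: rweq = {rweq_ic:.0f} ft, SG = {SG_ic:.1f}")
print(f"uniform flux:          rweq = {rweq_uf:.0f} ft, SG = {SG_uf:.1f}")
```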
The figure below shows the equivalent wellbore radius for an Infinite Conductivity Fracture
(blue) and a Uniform Flux Fracture (orange).
The Standard Skin considers that we keep the standard vertical well sandface as the flow
area through which the skin factor is applied. This is as if the skin pressure drop was occurring
at the well and not at the fracture sandface.
$$A_{Flow} = 2\pi r_w h$$
Because the reference areas are the same, under this convention the component skins add up,
and the total equivalent skin ST will be:
$$S_T = S_G + S_M$$
and $A_{Flow} = 4\,X_f h$
Because the reference areas are different, under this convention the component skins will need
a normalization to add up, and the total equivalent skin ST will be:
$$S_T = S_G + \frac{\pi\,r_w}{2\,X_f}\,S_M$$
In Saphir v4.12 there was a third way, where h as a reference thickness was replaced by Xf,
but it does not make any sense physically and will be removed in v4.20.
6.E.2 Behavior
In the absence of storage, the first flow regime is linear flow along the fracture axis. This simultaneously induces a linear flow orthogonal to the fracture, the amplitude of which changes along the fracture length, i.e. there is a non-uniform flux into the fracture, in contrast to the high-conductivity models. This bi-linear flow regime, with linear flow along two axes, gives rise to a pressure response proportional to the fourth root of time. Both the pressure and the Bourdet derivative curves exhibit quarter slopes on the loglog plot during bi-linear flow. Bi-linear flow is followed by the usual linear flow, characterized by a ½-unit slope on the loglog plot.
The bi-linear flow regime usually occurs at very early time, and is not always seen. It represents the time at which the pressure drop along the fracture is significant, and in reality this time is very short. Even when there is no storage effect, the data sometimes does not exhibit a ¼-slope and can be matched directly with a high-conductivity fracture model. However, the general model for an ‘induced fracture’ well should be the finite-conductivity fracture model, as there will always be a pressure drop along the fracture, however small. It is, however, often not significant compared to the linear pressure drop in the reservoir into the fracture.
There are two additional parameters that need to be specified in this model: the fracture width (w) and the fracture permeability (kf); in fact it is the permeability thickness of the fracture that is specified (kf·w).
When the fracture conductivity is very high, the model approaches the infinite-conductivity
response, with a ½-slope developing immediately. Conversely, with low kfw the pressure drop
along the fracture is significant almost to the onset of radial flow (IARF). When such flow
occurs the relationship between the pressure change and the fourth root of elapsed time is
given by the following relationship:
$$\Delta p = \frac{44.11\,qB\mu}{h\sqrt{k_f w}\left(\phi\mu c_t k\right)^{1/4}}\;t^{1/4}$$

$$\Delta p = m\,\sqrt[4]{t} \qquad\text{where}\qquad m = \frac{44.11\,qB\mu}{h\sqrt{k_f w}\left(\phi\mu c_t k\right)^{1/4}}$$

$$\Delta p' = t\,\frac{d\Delta p}{d\ln t} = t\,\frac{d\Delta p}{dt} = t\cdot\frac{m}{4\,t^{3/4}} = \frac{1}{4}\,m\,\sqrt[4]{t} = \frac{\Delta p}{4}$$

$$\log(\Delta p) = \log(m) + \frac{1}{4}\log(t) \qquad\text{and}\qquad \log(\Delta p') = \log(\Delta p) - \log(4)$$
During bi-linear flow the pressure change and the Bourdet derivative follow two parallel straight lines with a slope of one quarter (1/4). The level of the derivative is a quarter of that of the pressure change.
This is followed by the onset of linear flow, where the pressure change and the Bourdet derivative then follow two parallel straight lines of half slope (1/2), with the level of the derivative half that of the pressure change.
When radial flow is reached we have the usual stabilization of the derivative curve.
With very low fracture conductivity the linear flow will not develop and the bi-linear flow will dominate until the onset of infinite acting radial flow.
In practice, massive hydraulic fracturing is only common in very low permeability formations.
We typically encounter this sort of stimulation in Tight Gas, Shale Gas and Coal Bed Methane
(CBM) type reservoirs. This topic is covered in the chapter on ‘Unconventional Reservoirs’.
It suffices to say here that these types of wells would not be economically viable or would not
produce at all without this sort of stimulation. It is now becoming popular not only to induce one single fracture in the well: horizontal wells are selectively ‘fracced’, and multiple individual low conductivity fractures may exist. It goes without saying that the analyst’s task
can become more than just challenging.
It is easy to imagine that for a period subject to pressure transient analysis the time to reach
infinite acting radial flow will be prohibitively long, and in some cases the transient will never
reach this flow regime at all (well, maybe after thousands of years). Thus one can understand
that lacking some of the flow regimes the interpretation of massive hydraulically fracced wells
can be quite difficult.
It is therefore always recommended that a pre-frac test be carried out to determine the permeability thickness product of the formation (kh).
$$m = \frac{44.11\,qB\mu}{h\sqrt{k_f w}\left(\phi\mu c_t k\right)^{1/4}} \qquad\text{and}\qquad k_f w = 1945\left(\frac{qB\mu}{h\,m}\right)^2\frac{1}{\sqrt{\phi\mu c_t k}}$$
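As an illustration of this use of the bilinear slope, the sketch below (Python, invented inputs, our own variable names) takes k from a pre-frac test and the quarter-slope m from the fractured well response to estimate kf·w with the relation above.

```python
import math

# Hypothetical field-unit inputs (illustration only)
q, B, mu = 300.0, 1.3, 0.03           # rate, volume factor, viscosity
h, phi, ct = 80.0, 0.08, 3.0e-4       # ft, -, 1/psi
k = 0.01                              # mD, from the pre-frac test
m = 25.0                              # psi/hr^0.25, slope of dp vs t^(1/4)

# Bilinear flow analysis: fracture conductivity kf.w
kfw = 1945.0 * (q * B * mu / (h * m)) ** 2 / math.sqrt(phi * mu * ct * k)   # mD.ft

print(f"kf.w = {kfw:.0f} mD.ft")
```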
In fact, the analyst is often faced with this dilemma. From a design point of view the fracture job may be judged successful: we know what we pumped and the proppant disappeared, so the fracture must be in the ground. But often the frac will propagate up or down, and parallel short fractures may be induced, thus masking the very long fracture that was designed for. It is just not manifesting itself in the bottom hole pressure data.
The figure below illustrates the effect of wellbore storage on the early time behavior.
$$\Delta p_{Skin} = 141.2\,\frac{q_{sf}\,B\mu}{kh}\,S_T$$

where ΔpSkin is the pressure difference between our data and the response of a standard, undamaged, fully penetrating vertical well.
The geometrical skin expresses the relation between the half fracture length and the skin for values of the dimensionless fracture conductivity $\frac{k_f w}{k\,X_f}$ of 300 and above:

$$S_G = -\ln\left(\frac{X_f}{2.015\,r_w}\right) \qquad r_{weq} = \frac{X_f}{2.015} \approx 50\%\;X_f$$
The Standard Skin considers that we keep the standard vertical well sandface as the flow
area through which the skin factor is applied. This is as if the skin pressure drop was occurring
at the well and not at the fracture sandface.
$$A_{Flow} = 2\pi r_w h$$
Because the reference areas are the same, under this convention the component skins add up,
and the total equivalent skin ST will be:
$$S_T = S_G + S_M$$
Because the reference areas are different, under this convention the component skins will need a normalization to add up, and the total equivalent skin ST will be:

$$S_T = S_G + \frac{\pi\,r_w}{2\,X_f}\,S_M$$

Influence of skin
The figure below illustrates the influence of fracture skin on the behavior of the pressure change and the Bourdet derivative. There is no effect on the derivative, but the pressure change will no longer show the quarter slope behavior of bi-linear flow, as the pressure change becomes flatter when the fracture skin increases.
The partial penetration effect also comes into play with all sorts of formation testing tools where the point of production can be just one single small ID probe; the pressure measurements may thus be affected by the partial penetration spherical flow regime right through the measurement time. This particular test type is covered in the chapter on ‘PTA - Special test operations’.
6.F.2 Behavior
In theory, after wellbore storage, the initial response is radial flow in the perforated interval hw
(see figure above), shown as ‘1’. This ‘stabilization’ gives the product krhw (the subscript r
stands for radial) and it can be imagined that if there were no vertical permeability this would
be the only flow regime present before the influence of any lateral boundaries. In practice this
flow regime is more often than not masked by wellbore storage.
In flow regime ‘2’ there is a vertical contribution to flow, and if the perforated interval is small
enough a straight line of slope –½ (negative half slope) may develop in the Bourdet derivative,
corresponding to spherical or hemi-spherical flow. The pressure is then proportional to $1/\sqrt{t}$.
The relation between the pressure change and the ‘one over the square root’ of the elapsed time is:

$$\Delta p = \frac{70.6\,qB\mu}{k_s\,r_s} - \frac{2453\,qB\mu\sqrt{\phi\mu c_t}}{k_s^{3/2}}\cdot\frac{1}{\sqrt{t}}$$
With:

$$k_s = \left(k_r^2\,k_v\right)^{1/3}$$
Finally, when the diffusion has reached the upper and lower boundaries, the flow regime
becomes radial again, and the stabilization now corresponds to the classical product, krh.
In any model where there is a vertical contribution to flow, there must also be a pressure drop
in the vertical direction, and vertical permeability has to be considered along with the radial
permeability. The pressure drop due to the flow convergence (flow regime 2, spherical flow) is
a ‘near-wellbore’ reservoir effect caused by the anisotropy. If the spherical flow is seen in the
data it may be possible to separate the ‘damage’ and ‘geometric’ components of the total skin
determined from the second IARF period.
$$\Delta p = m\,\frac{1}{\sqrt{t}} \qquad\text{where}\qquad m = \frac{2453\,qB\mu\sqrt{\phi\mu c_t}}{k_s^{3/2}}$$

$$\Delta p' = t\,\frac{d\Delta p}{d\ln t} = t\,\frac{d\Delta p}{dt} = t\cdot\frac{m}{2\,t^{3/2}} = \frac{1}{2}\,\frac{m}{\sqrt{t}}$$
The characteristic flow regime is spherical flow until upper and lower boundaries have been
reached and then followed by radial flow in the reservoir.
The interesting flow regime here is spherical flow, during which the pressure change is proportional to $1/\sqrt{t}$ and the Bourdet derivative follows a negative half unit slope straight line. From this flow regime it is possible to determine the anisotropy $k_v/k_r$. The below figure illustrates the behavior of a well with partial penetration.
The following figure shows the difference between spherical and hemi-spherical flow.
With a high enough vertical permeability the spherical flow may not be seen at all; this is also
dependent upon the ratio hw/h, the fraction of the producing interval that is contributing and
the level of wellbore storage. As kv decreases the negative half slope spherical flow derivative
becomes increasingly evident. The geometrical skin also increases.
$$m = \frac{2453\,qB\mu\sqrt{\phi\mu c_t}}{k_s^{3/2}}$$

The anisotropy can then be determined:

$$\frac{k_v}{k_r} = \left(\frac{k_s}{k_r}\right)^3$$
This is valid when the contributing interval is such that spherical flow develops. If the open interval is close to the upper or lower boundary then hemispherical flow will develop and the slope of the specialized plot has to be divided by two.
The below figure illustrates the specialized plot of pressure change versus $1/\sqrt{t}$.
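A small numerical sketch of this spherical flow analysis (Python, with assumed inputs and our own variable names; kr is taken from the final radial flow stabilization):

```python
import math

# Hypothetical inputs
q, B, mu = 1200.0, 1.2, 0.9           # STB/D, rb/STB, cp
phi, ct = 0.18, 9.0e-6                # -, 1/psi
m = 150.0                             # psi.hr^0.5, slope of dp vs 1/sqrt(t)
kr = 40.0                             # mD, from the final radial flow stabilization

# Spherical permeability from the slope, then the vertical anisotropy
ks = (2453.0 * q * B * mu * math.sqrt(phi * mu * ct) / m) ** (2.0 / 3.0)    # mD
kv_over_kr = (ks / kr) ** 3

print(f"ks = {ks:.1f} mD, kv/kr = {kv_over_kr:.3f}")
```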
The following figure illustrates the effect of increasing wellbore storage on a limited entry well.
A variety of correlations has been presented in the literature to estimate the geometrical skin. In Saphir the geometrical skin is determined by the difference between the model skin and the model skin for a fully penetrating vertical well with no skin damage, thus the various skin components add up directly:

$$S_M = S_T - S_G$$
At this stage one has to be careful about the convention of how the skin is defined. There are
two ways to define the Skin factor in limited entry well model, and these are accessible in the
Saphir settings page, Interpretation option, Skin tab:
The skin can be referred to the perforated interval; the skin components then need to be normalized to add up, and the total equivalent skin ST will be:

$$S_T = \frac{h}{h_w}\,S_M + S_G$$
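A minimal sketch of the two conventions (Python, our own helper names, purely illustrative values):

```python
def skin_ref_full_interval(SM, SG):
    """Skin referred to the full interval h: components add up directly."""
    return SM + SG

def skin_ref_perforated_interval(SM, SG, h, hw):
    """Skin referred to the perforated interval hw: SM is scaled by h/hw."""
    return (h / hw) * SM + SG

# Hypothetical damaged, limited entry well
print(skin_ref_full_interval(SM=2.0, SG=15.0))                           # 17.0
print(skin_ref_perforated_interval(SM=2.0, SG=15.0, h=100.0, hw=20.0))   # 25.0
```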
There are often questions about how positive a skin can be in reality. The reality has to be
seen in the proper light, that is with the appropriate reference. Imagine a test carried out with
a 2 inch probe of a formation tester. The skin is determined to be 1 with reference to the
probe dimension. If the test had reached radial flow a classical semilog analysis would reveal a
skin (total skin) of 600 calculated with the appropriate net drained interval of 100 ft. So yes,
very positive skin values may exist, the trick is to know what they mean.
After the fact, the reason is pretty straightforward: the reality is far more complex than what
we model. When dealing with vertical wells, we are making the same mistakes, but there is an
enormous and fortuitous averaging effect taking place around the vertical well, and the
response will be close to what theory predicts. In other words, we were lucky at the start when we used simplistic models for vertical wells and Infinite Acting Radial Flow, and it worked (just).
No such luck with horizontal wells, as the response is very sensitive to the assumptions that
we make, whether it is the homogeneity of the formation, the part of the horizontal well that
will produce effectively (the part of the horizontal section that is contributing to the
production), the well geometry (the first rule of horizontal wells being that they are not
horizontal) and multiphase flow behavior in the wellbore.
This warning is to say, yes fine, there are horizontal well models, and they give some
theoretical behavior that will be described in this section. In the next section you will find
‘textbook’ data and some not so ‘textbook’ but overall you will realize that we can indeed make
sense of the majority of data sets and achieve consistent analysis results. When we cannot, it
is better to explain why and avoid at all costs inventing all sorts of schemes to try to explain
the unexplainable.
6.G.1 Hypothesis
The well is assumed to be strictly horizontal, in a homogeneous formation which is also strictly
horizontal and of uniform thickness h. As a start, we will consider that the reservoir is isotropic
in the horizontal plane, but we will allow vertical anisotropy and other parameters to be
defined as in the limited entry well.
6.G.2 Behavior
The first flow regime, often obscured by wellbore storage, is pseudo-radial flow in the vertical
plane, analogous to radial flow in a vertical well (see below figure). The average permeability
combines the vertical and a radial (horizontal) component with horizontal anisotropy. In most
cases the horizontal anisotropy is ignored and the permeability combination is just that of
vertical and radial permeability. The thickness corresponds to the producing well contributing
length. The level of the horizontal derivative, or the slope of the semilog straight-line, is
therefore:
$$\left(kh\right)_{early} = h_w\sqrt{k_v\,k_r}$$
Fig. 6.G.2 – Main flow regimes: Pseudo-radial, linear flow and radial
If the vertical permeability is relatively large, the geometrical skin will be negative and the
second flow regime is linear flow between the upper and lower boundaries. The Bourdet
derivatives will follow a 1/2-unit slope.
$$\left(kh^2\right)_{linear} = k_r\left(\frac{h_w}{2}\right)^2$$
When the vertical permeability is small the geometrical skin becomes positive and the behavior
of the second flow regime will be similar to that observed in limited entry wells.
The final flow regime is radial flow equivalent to that in a vertical well, with the second
derivative stabilization representing the usual kh if the reservoir is considered isotropic.
$$\left(kh\right)_{late} = k_r\,h$$
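Purely as an illustration, the sketch below evaluates the three permeability-thickness groups listed above for an assumed set of reservoir parameters (toy values, our own script, not a Saphir output).

```python
import math

kr, kv = 50.0, 5.0         # mD, radial (horizontal) and vertical permeability
h, hw = 100.0, 1500.0      # ft, reservoir thickness and contributing well length

kh_early = hw * math.sqrt(kv * kr)       # early radial flow in the vertical plane
kh2_linear = kr * (hw / 2.0) ** 2        # linear flow between top and bottom boundaries
kh_late = kr * h                         # final radial flow in the horizontal plane

print(f"(kh)early   = {kh_early:.0f} mD.ft")
print(f"(kh2)linear = {kh2_linear:.2e} mD.ft^2")
print(f"(kh)late    = {kh_late:.0f} mD.ft")
```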
Looking end-on into a horizontal well is equivalent to looking down a vertical well. The first
flow regime after storage in a vertical well is radial flow, and in a horizontal well the same
applies. However due to the anisotropy the flow around the wellbore is not circular, but
elliptical, as the diffusion will typically propagate more slowly in the vertical direction. Had the
reservoir been totally isotropic in all directions then the diffusion around the horizontal well
would be perfectly radial.
Once the diffusion has reached the upper and lower boundaries the flow becomes linear (if the
geometrical skin is negative), equivalent to the parallel faults geometry in a vertical well but
because of the finite length of the horizontal wellbore it cannot stay linear forever. Eventually
the diffusion has reached sufficiently far from the wellbore that the dimensions of the
horizontal section become irrelevant, and the flow again becomes radial, equivalent to radial
flow in a vertical well.
If the well is closer to one or the other boundary, there will first be a doubling of the
derivative, as if seeing a fault in a vertical well, before the second boundary brings on the
linear flow. The following figure illustrates this.
If the upper or lower boundary is a gas cap or an aquifer, the well will probably be positioned
close to the other sealing boundary. In that case there will again be a doubling of the
derivative, similar to the ‘fault’ response in a vertical well, followed by a constant pressure
response. In each case the doubling of the derivative will not always be fully developed before
the arrival of the next flow regime, be it linear flow or a constant pressure boundary.
A variety of correlations has been presented in the literature to estimate the geometrical skin.
In Saphir the geometrical skin is determined by the difference between the model and the
model for a fully penetrating vertical well with no skin damage.
$$S_T = S_M + S_G$$
The infinitesimal skin SM is constant at the wellbore; however several references can be used
to normalize the different skins.
$$S_T = \frac{h}{h_w}\,S_M + S_G$$
And including the anisotropy:
$$S_T = \frac{h}{h_w}\sqrt{\frac{k_r}{k_v}}\,S_M + S_G$$
The figure below illustrates the influence of the skin on the behavior of the pressure change
and the Bourdet derivative. There is no effect on the derivative as long as the skin is 0 or
positive. If the skin SM becomes negative then both the pressure change and the derivative will mask the early time radial flow, and the shape will approach the behavior of the fractured well.
In addition we know that most ‘horizontal’ wells are not horizontal at all and in almost all real
cases they cut through various dipping layers.
Unfortunately there is not much we can do to produce miracles. The challenge is to recognize
that a flow regime has been masked by another and if not, make the right diagnostics.
When a flow regime is missing in the response we have to rely on the ‘known’ parameters that
were discussed in a previous chapter of this document. And, we are in luck, Saphir has some
tools that are very useful and can help us produce a complete analysis with confidence even if
the data recorded is incomplete.
Further, if we assume that the seismic and log data is complete and that we already have a good knowledge of the expected horizontal contribution (hw), the vertical drainage (h) and the anisotropy ratio, the following outlines a procedure that will help in the analysis of incomplete data.
$$\left(kh_w\right)_{early} = h_w\sqrt{k_v\,k_r} = 162.6\,\frac{qB\mu}{m_{semilog}}$$

and the skin: $S_M = S\,\dfrac{h}{h_w}\sqrt{\dfrac{k_r}{k_v}}$, evaluated with the ‘known’ $k_v$, $k_r$, $h_w$ and with $h = h_w$.
We can also trace the ‘early time radial flow’ line on the loglog plot to determine $(kh_w)_{early} = h_w\sqrt{k_v\,k_r}$, and SM directly.
‘Knowing’ hw and kv/kr we estimate quickly what kr should be if the ‘known’ parameters are right. We also ‘know’ the pressure match: $PM = \frac{1.151\,k_r h}{162.6\,qB\mu}$ and can quickly place the infinite acting radial flow line at the right level on the loglog plot.
Fig. 6.H.2 – Loglog plot with ‘early time’ line and pressure match
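A sketch of how these two steps could be scripted (Python; the ‘known’ values are hypothetical and the variable names are ours):

```python
import math

# 'Known' parameters, assumed from logs and seismic
hw, h = 1200.0, 80.0                  # ft
kv_over_kr = 0.1
q, B, mu = 3000.0, 1.25, 0.6          # STB/D, rb/STB, cp

# Read from the 'early time radial flow' line: (khw)early = hw*sqrt(kv*kr)
kh_early = 9000.0                     # mD.ft

# kr consistent with the 'known' hw and kv/kr
kr = kh_early / (hw * math.sqrt(kv_over_kr))

# Pressure match of the IARF line, to place on the loglog plot
PM = 1.151 * kr * h / (162.6 * q * B * mu)     # 1/psi

print(f"kr = {kr:.1f} mD, pressure match = {PM:.4f} 1/psi")
```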
A plot of the pressure change versus $\sqrt{t}$ will return the length of the contributing drain (hw) if k = kr and h = h. The special ‘Channel’ line in the loglog plot will also return this parameter.
$$k\left(\frac{h_w}{2}\right)^2 = 16.52\,\frac{q^2B^2\mu}{m^2h^2\phi c_t}$$

where m is the slope of the straight line in the pressure versus $\sqrt{t}$ plot.
If we match the IARF line with the ‘early time’ radial flow and we replace the value of h with hw, the line in the square root plot and the specialized ‘Channel’ line in the loglog plot will return the vertical net drained thickness h as follows:

$$h = \left(L_1 + L_2\right)_{Model}\sqrt{\frac{k_v}{k_r}}$$
This gives us a double check that our approach is consistent despite the missing data. The below plots illustrate the square root plot and the ‘Channel’ special line in the loglog plot.
Finally we have enough information to be able to generate the model with the appropriate ‘known’ and ‘unknown’ parameters. The model match is illustrated below.
In addition we can use a ‘poly’ line to set the level of the ‘early’ radial flow, the linear flow and the ‘late’ radial flow. Just using the input of the ‘known’ parameters and transferring the ‘poly’ line results to the model, we see that the results from the line give a model very close to the data. A quick ‘improve’ of the leading parameters of the model will give us a perfect match with very reasonable results, see the figures below.
Fig. 6.H.6 – Loglog plot with ‘poly’ line
Fig. 6.H.7 – Loglog plot with model from ‘poly’ line
For those interested in the particular problem of production analysis in unconventional gas reservoirs, it is further developed in the KAPPA shale gas papers: The Analysis of Dynamic Data in Shale Gas Reservoirs – Part 1 and Part 2 (OH et al.).
In a tight matrix environment, the production from the drain-reservoir interface is negligible compared to the contribution of the multiple fractures. Consequently, the productivity index will be less sensitive to the drain damage than to the fracture quality. This will be illustrated below.
After the effects of wellbore storage, linear or bilinear flow behavior develops due to the fractures (½ or ¼ unit slopes, or both). The response may then correspond to radial flow in the vertical plane orthogonal to the horizontal borehole, with an anisotropic permeability $k = \sqrt{k_v\,k_r}$; this flow regime is, however, usually masked by the fracture influence.
As the top and bottom boundaries are sealing, the response shows the behavior of a vertical well between two parallel sealing faults, and the derivative should follow a positive half unit slope during this linear flow. However, there is no easy way to determine whether the half slope is caused by the fractures or by the horizontal wellbore between the upper and lower boundaries.
At later time stabilization in the derivative is observed corresponding to infinite acting radial
flow (IARF) in the horizontal plane relative to kh.
6.H.3.a Sensitivity
The below figure illustrates the behavior with various well configurations, flow through the
fracture only, through the drain only and through both. It shows clearly that the contribution
of the drain is negligible.
The horizontal well behavior is totally dominated by the presence of the fractures, the typical
shape of the horizontal well disappears and is replaced by the dominant behavior of the low
conductivity fractures with the characteristic shape of bi-linear (¼ slope) and linear (½ slope)
flow. The rules discussed in the previous section will no longer apply.
The behavior of fractured horizontal wells is therefore dominated by the fracture quality: fracture half length, conductivity and number. The spacing also plays an important role in the behavior, as seen in the following figure which shows the influence of the length of the horizontal drain with an equal number of identical fractures. The shorter the drain, with the equal number of fractures, the steeper the transition from bi-linear to infinite acting radial flow becomes, and the less the overall shape resembles that of a horizontal well.
Fig. 6.H.12 – Equal number of identical fractures, sensitivity to horizontal drain length
The figure below illustrates the influence of the half length of the fractures. In this case there are four fractures at equal spacing. When the half fracture length is small it is possible to observe the classical ‘early time’ radial flow of the horizontal well behavior. This is, however, rarely the case when dealing with multiple fractured horizontal wells.
6.H.3.c Skin
When we are facing a situation with multiple fractures in a horizontal well, we are invariably up against rock of low or negligible permeability. In most cases we are dealing with ‘unconventional gas’. Thus the damage of the actual well(bore) becomes a non-issue.
The skin will not influence the derivative, thus we will retain the diagnostic tool, and be able to
recognize the flow regimes present in the signal when no other influence is present, such as
high or changing wellbore storage and phase segregation.
The figure below illustrates the multiple fracture model with the influence of skin.
The second well is in an area where the vertical permeability is very low. This gives rise to the
typical but unorthodox behavior of a horizontal well with positive geometrical skin.
We have seen in low vertical permeability formations that horizontal wells are not very
efficient.
An answer to this problem is the slanted well which maximizes the communication length with
the formation while crossing it totally. Even zero vertical permeability formations are
connected to the well over the complete thickness.
Generating analytical solutions to simulate and study such wells is quite easy. The challenge is
to select the adequate model and to define the various corresponding parameters.
Common questions are: “my well is deviated, do I have to consider it as a slanted well?” or
“my well is sub-horizontal, do I analyze it as a horizontal or slanted well?”, and then the series
of questions: “Which thickness do I use if my formation is not horizontal, do I use the
measured depth or the true vertical depth?” In addition the slanted well can either be fully
penetrating or with a selected interval perforated or open to flow. The well will invariably cross
different layers so this will also have to be considered in the solutions.
The answer to these questions is simple after understanding that we do not analyze the
formation geometry but the pressure behavior, therefore the parameters or the criteria we use
are the parameters influencing this behavior. The selection of the model is imposed by the
shape of the pressure curve, not by the geometries.
Of course, the diagnostic will not tell you if the well is vertical, slanted or horizontal, you are
already aware of that (hopefully). It will tell you how it behaves and how it has to be treated.
The following sections illustrate the typical sequences of flow regimes and pressure behaviors that characterize the different well types.
Analytical solutions respect certain assumptions such as the negligible effect of gravity on fluid
flow. This means that the analytical slanted well model describes the cases of slanted wells in
a horizontal formation, or if you want, a vertical, slanted or a horizontal well in a slanted
multilayer formation. The parameter to consider is the angle between the well and the main
flow direction.
h is the net drained thickness, perpendicular to the flow direction (not systematically TVD
or MD).
6.I.1 Behavior
There are, as in the horizontal well, some main flow regimes that can develop. If the angle with the vertical is large, the well approaches a horizontal well; if the dip of the layer is such that the well follows the formation stratigraphically, then the equivalent behavior will be that of a horizontal well and three distinct flow regimes may develop.
‘Early time’ radial flow in the plane normal to the well; this regime is usually masked when the slant is such that the well is close to a vertical well. Wellbore storage will also in most cases mask this behavior.
Linear flow between the upper and lower boundaries if the well angle approaches the horizontal.
The below figure illustrates the loglog behavior of a real slanted well where all the main flow
regimes have developed.
The below figure illustrates the response of a fixed contributing length of a fully penetrating
vertical well, the same penetration at an angle of 45 degrees and the equivalent horizontal
well response. One can easily see that the horizontal well with the same penetration length as
the vertical well is a poorer well due to the anisotropy.
The next figure illustrates the behavior when the well is fully penetrating from top to bottom
boundaries with various inclination angles. The reservoir in this case is isotropic. It is possible
to observe the phenomenon caused by the end effects; flow at the extremities of each of the
slanted wells, as well as the ‘early time’ radial flow $h_w\sqrt{k_r\,k_v}$ equivalent to the response in a
pure horizontal well.
The following is a comparison of a horizontal well in a highly anisotropic rock with various
slanted wells, all with the same contributing length. It can be seen that crossing the reservoir
with a well at an angle is advantageous because of the anisotropy.
As the permeability contrast increases the efficiency of the well decreases. The figure below
shows a slanted well with increasing permeability contrast.
Finally we show a limited entry well with various inclinations, due to the anisotropy the vertical
well is the best choice. Spherical flow develops in all the wells, and lasts longer as the
inclination approaches the horizontal well.
6.I.4 Skin
The skin has no effect on the derivative so none of the features (shapes) particular to the
slanted well is lost. This is of course in practice not true as most wells will also be influenced
by wellbore storage, and worse still changing wellbore storage and/or phase segregation in the
wellbore.
Nevertheless, below is illustrated the effect of skin only on the slanted well.
The individual layer parameters are defined by a fraction of the total thickness h, the permeability thickness product kh and the storativity φ·ct·h. The anisotropy is also specified for
each layer.
The interlayer crossflow is determined by the Lambda value, which can be automatically
calculated.
6.I.5.b Behavior
The following figure illustrates the classical behavior of the double permeability response but
using this model (see the chapter on ‘Reservoir models’). The fully penetrating slanted well
model at 80° is compared to the vertical well. The longer slanted well is of course the best
producer. The reservoir is isotropic.
The following figure illustrates the response of a three layered reservoir; the well is fully
penetrating, vertical and slanted at 80°. The reservoir is isotropic. The double feature of the
transition due to unequal heterogeneity parameters in the different layers can easily be seen in
the fully penetrating vertical well. This is masked with the slanted well fully penetrating at 80°.
The following figure shows the responses with a horizontal well placed in a high permeability layer communicating with a low permeability layer, and with the well placed in the low permeability layer communicating with the high permeability layer. It is evident that placing the well in the high permeability layer is an advantage for the short term production.
We know that in general horizontal wells are not strictly horizontal, and in real life the wells will invariably cut through various layers on their way through the reservoir. This is just a fact of life and a problem the interpretation engineer is faced with constantly. Very often he will find to his dismay that the interpreted horizontal contributing length is much shorter than what had been predicted, or even measured by production logging, and that the anisotropy does not make any sense at all.
This could very well be caused by the fact that the well is in fact cutting through distinct layers
and cannot be analyzed as if the horizontal well drained one single homogeneous medium.
To illustrate this we have generated the slanted well model traversing various non-communicating layers. It can easily be seen from the below figure that if the response from several layers is interpreted using the classical horizontal well model draining one single layer, the interpreted contributing length (hw) would be too small, and the anisotropy would probably show a vertical permeability larger than the horizontal.
6.J.1 Hypothesis
Another answer to the quest for better productivity and sweep is the multilateral, maximum reservoir contact well. These consist of multiple drain holes, each of which can be aimed at a specific layer. For this purpose the drain holes may be drilled at different elevations and
directions.
This is the production engineer’s dream and the reservoir engineer’s nightmare. If ‘smart well
completions’ have not been installed only total well pressure is available, reflecting the
‘average’ behavior of all the drains/layers. The determination of the individual layer and drain
characteristics is therefore impossible through a single information source.
6.J.2 Behavior
Specific cases may allow analysis and the diagnostic of well-known behaviors. As an example;
the pressure behavior of two lateral drains at 180° is similar to the behavior of a horizontal
well of length equal to the sum of the two drains. The classical sequence (see horizontal wells)
can be observed: infinite acting radial flow in the vertical plane, linear flow then, at late time
horizontal radial flow.
The below figure illustrates the response of three different multilateral configurations.
In fact, the analytical model for multilateral wells simulates perfectly the sum of the individual drain behaviors, but it does not permit a diagnostic that could be useful to understand detailed drain properties.
Testing lateral drains individually would permit the determination of individual drain properties.
Aggregating this discrete information to simulate the global behavior is one possible approach.
7 – Reservoir models
OH – OSF – DV
7.A Introduction
In Pressure Transient Analysis (PTA), reservoir features are generally detected after wellbore
effects and well behavior have ceased and before boundary effects are detected. This is what
we might call an intermediate time response. In Production Analysis (PA), reservoir behavior
will be detected when a change in the production sequence is clear enough to produce
transients before Pseudo-Steady State is reached. So for the production analyst, reservoir
behavior will be a relatively early time effect that may or may not be seen.
The main parameter we are looking for is the mobility of the fluid in the reservoir, k/μ. When there is a doubt about the effective reservoir thickness, the parameter calculated is kh/μ. When the fluid viscosity is known and assumed to be constant we can calculate the
permeability-thickness product kh. Whatever the variant, and whether we are performing
pressure transient or production analysis, this result will be quantified by the pressure match
on the loglog (both PTA and PA) and Blasingame (PA) plots. This result will be common to all
models described below, whether they are homogeneous or heterogeneous.
In PTA, some well configurations will allow reservoir characterization even at the early time of
the pressure response. Early time behavior of limited entry wells will be a function of an
equivalent spherical permeability that in turn will depend on the reservoir vertical anisotropy.
Also the early time response of a horizontal well will involve an X-Z permeability which in turn
also depends on the reservoir horizontal anisotropy. Fractured well early time behavior will
depend on both fracture length and conductivity and so on.
The second factor will be the reservoir storativity φ·ct·h. Generally this parameter will be an
input, but in the case of an interference test this will be a result of the interpretation process,
generally giving the porosity and assuming the other two factors as ‘known’.
Finally, one will want to characterize the reservoir heterogeneities by using and matching
heterogeneous models. Such heterogeneities can be local, for example the double-porosity
case, vertical as in layered reservoirs, or areal, in the case of composite systems and any
combination of these. It is possible to input heterogeneities such as porosity, permeability and
thickness maps into a numerical model, or generate and upscale a geostatistical model. In this
case, quantifying the heterogeneities will be useful to correct our assessment of the long term
reservoir potential.
At early time the pressure response is dominated by the well models described in the chapter
on ‘Well models’, the most common early time responses are thus:
Wellbore storage, linear flow (high conductivity fracture), bilinear flow (low conductivity
fracture), spherical flow, horizontal well (linearity after early time radial flow). These regimes
are coupled with a hump caused by the storativity and the skin.
In addition we have the line source well with no skin or wellbore storage used for the analysis
of interference tests.
In fact, the reservoir response in a homogeneous reservoir is simply the linearization of the pressure with respect to the logarithm of time: infinite acting radial flow (IARF) is established and the Bourdet derivative stabilizes, flat at a level related to the permeability.
The below figure shows a schoolbook example of the response of a wellbore storage and skin homogeneous model in an infinite reservoir. If this were a common behavior, the task of the PTA engineer would be easy indeed.
The below figures illustrate the various homogeneous behaviors commonly seen in pressure transient analysis on a loglog plot. The line source solution is also shown.
And following we illustrate the semilog behavior of wellbore storage and skin in a
homogeneous reservoir.
Storage: Skin does not change the position of the early time unit slope (pure wellbore storage)
but affects the amplitude of the hump. A larger skin will produce a larger hump, hence
delaying the time at which Infinite Acting Radial Flow is reached.
IARF: Once IARF is reached, the skin has no effect on the vertical position of the derivative,
but has a cumulative effect on the amplitude of the pressure.
PSS: Skin does not have an effect on the time at which PSS is reached or on the derivative
response at the end. However the cumulative effect on the pressure remains and all responses
‘bend’ and remain parallel when PSS is reached (see history plot below).
7.B.1.b Permeability
The figure below presents the response with a variable permeability. Values for k are 2, 5, 10,
20 and 50 mD.
Storage and IARF: The derivative responses have the same shape but they are translated
along the wellbore storage line of unit slope. When the permeability is higher, the reservoir
reacts faster and deviates earlier from pure wellbore storage. The level of stabilization of the
derivative, i.e. the slope of the semilog plot, is inversely proportional to k. For this reason the
responses diverge on the semilog plot, the different slopes being inversely proportional to k.
PSS: At late time all derivative signals merge to a single unit slope. This is linked to the fact
that permeability has no effect on the material balance equation.
Fig. 7.B.8 – Influence of the reservoir permeability, semilog and history plot
The effect of a change in the wellbore radius is strictly the same as the consequence of a skin
change: Early time amplitude of the derivative hump, no middle time and late time effect on
the derivative, but a shift in the pressure that stays constant once wellbore storage effects are
over. The equivalence between wellbore radius and skin is hardly a surprise, as skin can also
be defined with respect to an equivalent wellbore radius. The well response is in fact a function
of the equivalent wellbore radius $r_{we} = r_w\,e^{-Skin}$.
Fig. 7.B.10 – Effect of wellbore radius rw, semilog and history plot
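A one-line check of this equivalence (Python, toy numbers of our own):

```python
import math

rw, skin = 0.30, -3.0                 # ft, stimulated well
rwe = rw * math.exp(-skin)            # equivalent wellbore radius

print(f"rwe = {rwe:.1f} ft")          # ~6 ft: the well 'behaves' much larger than it is
```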
7.B.1.d Porosity
The figure below presents the response by varying the porosity. Values for φ are 3%, 10% and 30%.
Storage and IARF: Porosity behaves like the skin or the well radius. A smaller porosity produces a higher hump on the derivative but does not change the derivative IARF level. The equivalence between porosity and skin is used in two different areas. In interference tests the skin has a marginal influence, and the pressure amplitude is used to assess the porosity. In hydrogeology, a value of skin is assumed (generally zero) and the absolute value of the pressure change is used to assess the Storativity S, i.e. the porosity.
For a given reservoir size, the time for PSS is proportional to φ. Underestimating the porosity by 10% will provide an overestimation of the reservoir bulk volume of 10%, and therefore an overestimation of the boundary distance. The total pore volume will remain correct.
Fig. 7.B.12 – Effect of the reservoir porosity, semilog and history plot
Fig. 7.B.14 – Effect of the total compressibility, semilog and history plot
7.B.1.f Viscosity
The next figure illustrates the response with variable fluid viscosity. Values for μ are 0.2, 0.5, 1, 2 and 5 cp. If we compare the response with Fig. 7.B.8 illustrating the effect of a permeability change (above), we see that the sensitivity to viscosity is exactly opposite to the sensitivity to permeability. At early time (Storage) and middle time (IARF), the derivative responses have the same shape but are translated along the wellbore storage line of unit slope. When the viscosity is lower, the reservoir reacts faster and deviates earlier from pure wellbore storage. The levels of stabilization of the derivative and the semilog slopes are proportional to μ. At late time all derivative signals merge to a single unit slope. In other words, the sensitivity to 1/μ is the same as the sensitivity to k on all parts of the response. This means that we have another governing group, k/μ, also called the mobility.
Fig. 7.B.16 – Effect of the fluid viscosity, semilog and history plot
7.B.1.g Thickness
Illustrated below is the response computed with a varying net drained thickness. Values for h
are 20, 50, 100, 200 and 500 ft.
Storage and IARF: Changing the thickness has a similar effect to changing the permeability, and an effect opposite to changing the viscosity. In other words, the governing group that defines the early and middle time response, apart from wellbore storage and skin, is kh/μ.
PSS: Unlike permeability and viscosity, the reservoir thickness also has an effect on the late time material balance calculation. The time at which the derivative deviates from IARF towards PSS does not change, and therefore the influence of the thickness on the position of the PSS straight line is similar to the sensitivity to the reservoir porosity or the compressibility.
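To make the governing group kh/μ concrete, the sketch below (ours, not part of the original text) evaluates the classical oilfield-unit semilog slope and derivative stabilization; 162.6 and 70.6 are the usual constants.

```python
import math

def iarf_levels(q_stb_d: float, B_rb_stb: float, mu_cp: float,
                k_md: float, h_ft: float):
    """Semilog slope m (psi/log-cycle) and Bourdet derivative
    stabilization (psi) during IARF, in oilfield units."""
    kh_over_mu = k_md * h_ft / mu_cp
    m = 162.6 * q_stb_d * B_rb_stb / kh_over_mu   # semilog slope
    derivative_level = m / math.log(10)           # = 70.6 qB.mu / kh
    return m, derivative_level

# Doubling k or h, or halving mu, halves both levels: only kh/mu matters.
print(iarf_levels(1000, 1.2, 1.0, 10, 100))
```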
Fig. 7.B.18 – Effect of the reservoir thickness, semilog and history plot
The figure below (Fig. 7.B.20) illustrates the response with a variable rate for each simulation. Values for q·B are 600, 800, 1000, 1200 and 1400 rb/d.
The result of varying q·B is a straight multiplication of the pressure change from pi. The loglog response is shifted vertically, and the semilog and history plots are vertically compressed or expanded, the fixed point being the initial pressure.
Fig. 7.B.20 – Effect of the rate– q.B, semilog and history plot
The double-porosity model is described by two additional variables on top of the parameters defining the homogeneous model. ω is the storativity ratio, essentially the fraction of fluids stored in the fissure system (e.g. ω = 0.05 means 5%).
ω, storativity ratio:
ω = (Vφct)f / [(Vφct)f + (Vφct)m]
λ is the interporosity flow coefficient that characterizes the ability of the matrix blocks to flow into the fissure system. It is dominated by the matrix/fissures permeability contrast, km/kf.
λ, the interporosity flow parameter:
λ = α·rw²·(km/kf)
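A small numerical sketch (ours) of these two definitions, where α is the matrix block geometry factor and storativities are taken as V·φ·ct:

```python
def omega(phi_f, ct_f, phi_m, ct_m, V_f, V_m):
    """Storativity ratio: fraction of fluid stored in the fissures."""
    stor_f = V_f * phi_f * ct_f
    stor_m = V_m * phi_m * ct_m
    return stor_f / (stor_f + stor_m)

def interporosity_lambda(alpha_per_ft2, km_md, kf_md, rw_ft):
    """Interporosity flow parameter: lambda = alpha * rw^2 * km / kf."""
    return alpha_per_ft2 * rw_ft**2 * km_md / kf_md

print(omega(phi_f=0.02, ct_f=3e-6, phi_m=0.10, ct_m=3e-6, V_f=0.05, V_m=0.95))
print(interporosity_lambda(alpha_per_ft2=6e-2, km_md=0.01, kf_md=100.0, rw_ft=0.3))
```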
When the well is first put on production, after any wellbore dominated behavior, the first flow regime to develop is the fissure system radial flow, i.e. the fissure system is producing as if it were there alone, and there is no change in the pressure inside the matrix blocks. This first flow regime is typically over very quickly, and is frequently masked by wellbore storage. If not, it will develop as an IARF response and the pressure derivative will stabilize horizontally.
Once the fissure system has started to produce, a pressure differential is established between the matrix blocks and the fissures: the matrix is still at the initial pressure pi, while the fissure system has a pressure pwf at the wellbore. The matrix blocks then start to produce into the fissure system, effectively providing pressure support, and the drawdown briefly slows down as this extra energy tends to stabilize the pressure; a transitional dip in the derivative is thus created.
The total system radial flow (IARF) is established when any pressure differential between the
matrix blocks and the fissure system is no longer significant, and the equivalent homogeneous
radial flow response is observed. A second IARF stabilization in the pressure derivative is
therefore developed after the transitional dip, called by some the derivative valley. According to the mathematics, this takes place when the pressure inside the matrix blocks is the same as in the fissure system; however, this can never be strictly true at all points in the reservoir, as there would then be no production into the fissure system.
Fig. 7.C.1 – Fissure system production Fig. 7.C.2 – Total system production
In this case it is assumed that the pressure distribution in the matrix blocks is uniform, i.e.
there is no pressure drop inside the matrix blocks. A physical explanation for this might be that
the matrix blocks are small, so that any pressure drop inside them is insignificant compared to
the pressure diffusion in the reservoir away from the wellbore. The entire pressure drop takes
place at the surface of the blocks as a discontinuity, and the resulting pressure response gives
a sharp dip during the transition.
ω is the fraction of interconnected pore volume occupied by the fissures. It determines the depth of the dip: for small ω values, corresponding to a very high proportion of the hydrocarbon stored in the matrix system, the support during the transition is substantial, and the dip is deeper and longer. The figure below illustrates the influence of the value of ω.
λ describes the ability of the matrix to flow to the fissures, and is a function of the matrix block size and permeability. It determines the time of the start of the transition and controls the speed at which the matrix will react, and therefore the total duration of the transition. For a high λ, the matrix permeability is comparatively high, so it will start to give up its fluid almost as soon as the fissure system starts to produce. Conversely, a low λ means a very tight matrix, and more drawdown will have to be established in the fissure system before the matrix blocks appreciably give up any fluid, so the transition starts later. This is illustrated in the following figure.
This model assumes that there is a pressure gradient, and therefore diffusivity, within the matrix blocks. If the pressure profile inside the blocks is significant, then the shape of the blocks has to be taken into consideration, and for this reason two models are available, each corresponding to a different matrix block geometry.
The ‘slab’ geometry model assumes rectangular matrix blocks, which is what we have been
considering so far with the double-porosity PSS models. The ‘spheres’ model, physically
realistic or not, represents another simple geometry with which to define the boundary
conditions for the mathematical solution. It is difficult to visualize a reservoir consisting of
spherical matrix blocks, but perhaps due to fluid movements over geological time the fissure
network can become ‘vuggy’ and the edges of the matrix blocks rounded. Double-porosity data
sets sometimes match the ‘spheres’ model better than any other. As before, our mathematical
models may not be an accurate representation of what nature has provided in the reservoir,
but the performance from these models is very close to the measured pressures from these
wells. Below is a loglog plot of a buildup in a double porosity transient behavior reservoir.
Unfortunately the test stopped in the middle of the transition.
The following figure illustrates the difference between the 'slab' and 'sphere' blocks; the difference is small.
As shown in the following plots, the fissure system radial flow is very short-lived, and in practice is not seen. At the deepest point of the transition, the semilog slope / derivative value is half of the total system radial flow value. ω in this model has a more subtle effect on the shape of the derivative, and λ defines the time at which the response transitions to total system IARF.
If seen, the two lines would each correspond to kfh, radial flow in the fissure system, as in the
first case only the fissure system is producing. In the second case, although the total system is
producing, any pressure differential between the matrix blocks and the fissure system is now
negligible, and the only pressure drop in the system is in the fissures, as fluids flow to the
wellbore. Imagine a droplet of oil in a matrix block 50 meters from the wellbore; it lazily travels a few centimeters to enter the fissure system, expelled by a negligible Δp, and then travels 50 meters through the fissure network, accelerating as it approaches the wellbore as the pressure gradient increases and the flow area decreases. It is this pressure gradient in the fissure system that creates the measured wellbore response.
In case the two straight lines are seen in the response, semilog specialized analysis can also yield information about ω and λ. ω is evaluated using the vertical separation of the two straight lines:
ω = 10^(−δp/m)
and λ is evaluated using the time of the middle of the straight line through the transition:
λ = ω·ln(1/ω)·φμct·rw² / (0.000264·k·Δt)
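The ω estimate is a one-liner (our sketch; δp is the vertical separation between the two parallel semilog lines and m their common slope):

```python
def omega_from_semilog(delta_p_psi: float, slope_m_psi_per_cycle: float) -> float:
    """omega = 10^(-delta_p / m), from the vertical separation of the
    two parallel semilog straight lines."""
    return 10.0 ** (-delta_p_psi / slope_m_psi_per_cycle)

# Two lines separated by 60 psi with a 50 psi/cycle slope:
print(omega_from_semilog(60.0, 50.0))   # ~0.063
```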
An example of this type of analysis, easily performed using Saphir, is illustrated below.
Wellbore storage will invariably mask the fissure response in the double porosity reservoir. The
transition can thereby easily be misdiagnosed and the whole interpretation effort can be
jeopardized. However, there are some lifelines that can save the day.
Typically there will be a negative skin associated with the double porosity reservoir where no negative skin was expected. The wellbore storage constant also tends to show an unusually high value, reflecting the increase in effective wellbore volume created by the direct communication with the fissures and fractures.
Illustrated below is a plot that demonstrates how the early time fissure response and the transition are affected by an increasing wellbore storage coefficient. At higher wellbore storage coefficients even the whole transition period may be lost.
7.C.5 Skin
Positive skin has no influence on the derivative, and hence no influence on the transition, so no diagnostic feature is lost.
A highly negative skin will distort the early time response of both the derivative and the pressure change: the response approaches that of an infinite conductivity fracture, and linear flow can be seen to develop before the fissure system reaches infinite acting radial flow.
7.D.2 Hypothesis
To extend the double porosity PSS solution one can consider matrix blocks of different sizes. In the following case we will consider only two different matrix block sizes.
ω is still defined as the fraction of the interconnected pore volume occupied by the fissures. Each series of matrix blocks has its own value of λ (i.e. λ1 and λ2), corresponding to different transition times, and each subtype of blocks will occupy a different fraction of the total matrix pore space. We define δ1 as the fraction of the matrix pore space occupied by the first series of blocks with respect to the total block storativity:
δ1 = (Vφct)1 / [(Vφct)1 + (Vφct)2]    and    δ2 = 1 − δ1
An extension to the double porosity transient flow model is to add skin to the matrix face. The notion of spheres and slabs is still valid, and ω and λ have the same definitions as before. The figure below illustrates the schematic of the model.
For a constant ω value, the smaller the δ1 value, the smaller the first dip and the greater the second.
Although this geological setting can easily be observed in real life, the parameters governing this model are usually such that the two distinct transitions and valleys are not observed; in most cases only one transition valley is seen.
The following figure illustrates that, by adding skin to the matrix face, the transient solution approaches the PSS double porosity response.
The transient model with matrix skin will only be affected by significant wellbore storage, as illustrated in the following figure.
The below figure illustrates the model behavior with different skin values for the ‘triple
porosity’ PSS reservoir.
Next we show the influence of the model skin (the skin acting between the well and the fissures) on the transient model with a constant matrix skin of 2 (the skin acting between the matrix blocks and the fissures).
In the double-permeability (2K) analytical model the reservoir consists of two layers of different permeabilities, each of which may be perforated (contributing) or not. Crossflow between the layers is proportional to the pressure difference between them.
Making the comparison with the double-porosity PSS model, ω and λ have equivalent meanings, and one more leading parameter is added to describe the analytical model.
ω, the layer storativity ratio, is the fraction of interconnected pore volume occupied by layer 1:
ω = (Vφct)1 / [(Vφct)1 + (Vφct)2]
λ, the inter-layer flow parameter, describes the ability of flow between the layers; as in the double-porosity case it scales with α·rw², and its detailed expression is given below. In addition another coefficient is introduced: κ is the ratio of the permeability-thickness product of the first layer to the total of both:
κ = kh1 / (kh1 + kh2)
Generally the high permeability layer is taken as layer 1, so κ will be close to 1. At early time there is no pressure difference between the layers and the system behaves as two commingled homogeneous layers without crossflow. As the most permeable layer produces more rapidly than the less permeable layer, a pressure difference develops between the layers and crossflow begins to occur. Usually the semi-permeable wall hypothesis is applied, and λ depends upon the thickness of the wall, its vertical permeability and the individual vertical permeabilities of the layers:
λ = (rw²/kh) · 2 / (h'/k'z + h1/kz1 + h2/kz2)
Assuming that the vertical permeability in the layers is the same, if no semi permeable wall or
skin is present then:
λ = (rw²/kh) · (2·kz/h)
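The three leading parameters can be evaluated from the layer properties; the sketch below (ours) uses the simplified relation just above, i.e. no semi-permeable wall and a common vertical permeability, with storativities taken per unit area as φ·ct·h.

```python
def double_permeability_params(phi1, ct1, h1, k1, phi2, ct2, h2, k2,
                               kz_md, rw_ft):
    """omega, kappa and lambda for the 2K model (no semi-permeable wall,
    same vertical permeability kz in both layers)."""
    stor1, stor2 = phi1 * ct1 * h1, phi2 * ct2 * h2
    kh1, kh2 = k1 * h1, k2 * h2
    omega = stor1 / (stor1 + stor2)
    kappa = kh1 / (kh1 + kh2)
    lam = (rw_ft**2 / (kh1 + kh2)) * (2.0 * kz_md / (h1 + h2))
    return omega, kappa, lam

print(double_permeability_params(0.15, 3e-6, 30, 100, 0.10, 3e-6, 60, 1,
                                 kz_md=0.5, rw_ft=0.3))
```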
Eventually the system behaves again as a homogeneous reservoir, with the total kh and storativity of the two layers.
A transitional dip is governed by ω and λ, which have the same effect as in the double porosity models, and by κ, which reduces the depth of the dip as κ decreases. If κ = 1 then k2 = 0, and the low permeability layer is equivalent to the matrix blocks of a double porosity system: it can only produce by crossflowing into the high permeability layer, which is equivalent to the fissure system of the double porosity PSS model.
The following figures illustrate the sensitivity to the parameters ω, λ and κ. Both layers are perforated and can produce to the well.
7.E.4 Skin
Varying the skin in layer 2 has little or no impact on the model behavior, as illustrated below. However, varying the skin in the high permeability layer sets up a totally different response and effectively describes a different well configuration through the reservoir model: the well could be perforated in the low permeability layer only, or the high permeability layer could have been plugged at the well, inducing a considerable skin. In this way the high permeability layer can only contribute to the production through reservoir crossflow to the lower permeability layer.
This is a scenario similar to that of limited entry or partial penetration, but in this case
spherical flow will not quite develop. This well and reservoir configuration can easily be
analyzed using the double permeability model by increasing the skin in layer 1 to simulate the
high skin, plugging or non perforation of this layer. The below figure illustrates the behavior of
the model as the skin in the high permeability layer increases.
Fig. 7.E.9 – Classic double permeability    Fig. 7.E.10 – Double permeability, layer 1 damaged
The two layer model can be applied to many cases, insofar as it is frequently possible to split a multilayer formation into two packs of layers and treat this as an equivalent two layer system. However, this may become an oversimplification and the use of more layers in the model can be necessary.
The same principle used in the double permeability solution can be directly extended to more
than two layers.
The parameters already defined for the two layer case are extended to layer i:
ωi, layer storativity ratio, is the fraction of interconnected pore volume occupied by layer i with respect to the total pore volume:
ωi = (Vφct)i / Σj (Vφct)j
λi, inter-layer flow parameter, describes the ability of flow between layers i and i+1; it takes the same form as in the two-layer case, with khi + khi+1 replacing kh1 + kh2.
κi is the ratio of the permeability-thickness product of layer i to the total kh of all layers:
κi = khi / Σj khj
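A short sketch (ours) of the layer-by-layer ratios, taking the storativity of each layer as φ·ct·h per unit area:

```python
def layer_ratios(phi, ct, h, k):
    """Return the lists of omega_i and kappa_i for an n-layer system."""
    stor = [p * c * t for p, c, t in zip(phi, ct, h)]
    kh = [ki * hi for ki, hi in zip(k, h)]
    omegas = [s / sum(stor) for s in stor]
    kappas = [x / sum(kh) for x in kh]
    return omegas, kappas

print(layer_ratios(phi=[0.15, 0.12, 0.08], ct=[3e-6] * 3,
                   h=[20, 40, 15], k=[200, 10, 50]))
```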
In this type of multi-permeability system one can expect to see as many heterogeneous responses as there are dual systems, as long as the parameters governing the heterogeneity are sufficiently different not to mask one another.
The below figure illustrates the behavior of a three layered model with cross flow. At early
time, the layers are producing independently and the behavior corresponds to three layers
producing without crossflow. When the first interlayer crossflow starts (at the interface with the greater λ value), a transition period is observed, shown by an inflection in the pressure response and a valley in the derivative.
After this first transition, the derivative comes back to its IARF stabilization and then exhibits a
second transition. After all the transitions the reservoir acts as a homogeneous medium with
total kh and storativity.
In the example only one parameter, ω2, changes the overall shape of the response, as an increase in ω2 implies a decrease in ω3 and therefore an inversion of contrast:
ω3 = 1 − ω1 − ω2    and    κ3 = 1 − κ1 − κ2
The three layered model demonstrated here is part of the Saphir external model library. This library also contains a four layered model with crossflow.
kz = (h1 + h2) / (h1/kz1 + h2/kz2)
And the relationship with λ is:
λ = (rw²/kh) · (2·kz/h)
The leakage factor has a value between 1 (full leakage) and 0 (no cross flow).
The use of numerical models to simulate complex layered formations is detailed in the chapter
‘Multilayered models’.
Each layer can produce to the well or be completely blocked off thus only producing through
cross flow.
The below figures illustrate the response of the numerical simulator with two layers with various leakage factors, and the response when the high permeability layer has been selectively shut off (this is a feature of the numerical multilayered model: one can selectively shut off or open individual layers for flow to the well).
Fig. 7.F.4 – Numerical two layers, high permeability layer shut off
If the layers are homogeneous and infinite, and even if the initial pressure of each layer is different, we will not be able to differentiate the behavior of each layer in the common pressure response measured in the wellbore. The response measured at the pressure gauge is simply the global response.
Now, any text book with some respect for itself will invariably present a pressure response that wonderfully shows the behavior of at least two and sometimes several layers. And this is possible if one or several of the layers are bounded. The boundaries are described in more detail in the chapter on 'Boundary models'; however we show below the response of a two layered system where one layer is infinite and the other layer is just a small sand lens. And, yes, the first stabilization of the derivative corresponds to the total kh, and the second level corresponds to the permeability-thickness product of the infinite layer. The unit slope response in the middle of this buildup reflects the bounded layer, and the distance (to the boundaries) can be deduced. This type of behavior can also easily be described by a composite model, as it is a limiting case of the radial composite model, so the reader should be aware.
Fig. 7.G.1 – Two layers, one layer bounded the other infinite
To reiterate the above observations: as long as each layer is homogeneous and infinite acting, the response of any multilayered system will be an equivalent global response, with early time wellbore storage and a global derivative stabilization at the level of the total kh:
(kh)total = Σ(i=1..n) ki·hi
with the total skin:
ST = [Σ(i=1..n) Si·ki·hi] / (kh)total
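A quick check of the two sums above (our sketch):

```python
def commingled_totals(k_md, h_ft, skins):
    """Total kh and kh-weighted total skin of a commingled system."""
    kh = [k * h for k, h in zip(k_md, h_ft)]
    kh_total = sum(kh)
    skin_total = sum(s * khi for s, khi in zip(skins, kh)) / kh_total
    return kh_total, skin_total

# A good but damaged layer dominates the apparent total skin:
print(commingled_totals([200, 5], [30, 100], [8, 0]))   # (6500, ~7.4)
```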
7.G.1 Hypothesis
The reservoir has n layers; each layer can have any of the well models already described in the chapter on 'Well models'. The reservoir model of each layer can be any of the reservoir models already presented in this chapter, and each layer can have boundary configurations like those presented in the chapter on 'Boundary models'. The initial pressure pi can be different from layer to layer.
The analytical model does not allow any crossflow between the layers in the reservoir; the production to the well is therefore commingled. Crossflow between layers is only allowed in the wellbore.
The total behavior of the model will be a superposition of each layer behavior; it is therefore far too complex to generalize the model behavior in the same manner as was done for the other reservoir models described elsewhere in this chapter.
The concept of pattern recognition and the identification of the different flow regimes during a flow period to be analyzed is no longer a straightforward task, and the notion of solving the 'inverse' problem from the pressure response alone is lost. In order to come up with a plausible answer, the engineer has to rely more heavily on other results and measurements made in the well. Open hole logging, cores and seismic are some of the data that will, and have to, influence the model choice and the final interpretation.
And finally, the layer contributions: these can be the individual rate of each layer or a combination of rates at different stations in the wellbore. Without the layer rates the engineer is faced with a task he can rarely win.
But we are the bearers of good news: the multilayer model can include the individual layer rates or a combination of different layer contributions. And the best news is that these layer rates, or transient layer rates, can be taken into account when optimizing the solution with nonlinear regression. In fact, the only way to give a solution a proper chance is to include the layer rates in the objective function; optimizing on the layer rates thus becomes part of the method for solving the multilayer transient test problem.
So, the bad news is that without production logging, PTA with the objective of assigning individual reservoir characteristics and skin to each layer is virtually impossible with any confidence. Not knowing the layer contributions will open a 'Pandora's box' so full of different models and leading parameters, of which any combination can produce the same response, that it may become impossible to make a reasonable choice.
In fact the problem is expanded to what we are often up against with even simpler models; the
solution is very far from unique.
As already stated, the model response will be a superposition of the different well and reservoir models assigned to each layer, thus it is impossible to describe this in any detail over
a few pages. The model response of the various well models is covered in the chapter on ‘Well
models’, the reservoir responses are covered in this chapter, and the boundary responses are
covered in the chapter on ‘Boundary models’.
We will however, in the following section, describe how we build a multilayer model in the PTA
(Saphir) or PA (Topaze) KAPPA suite of programs.
The layer rates will have to be loaded either as a compatible file, from Emeraude, or by hand. The rates are usually measured during production logging and can be transient or stationary rates versus time, or the result of a production log versus depth interpretation, such as provided by Emeraude.
The next step is to define the layer’s well, reservoir and boundary model and the layer
characteristics.
Generate the model and adjust the parameters by hand, or run the improve option. To include the layer rates in the objective function of the nonlinear regression, the optimization is carried out on the history (simulation) plot.
A numerical multilayer model is also available. The numerical model is more flexible and allows
cross flow in the reservoir. This is defined either as a leakage factor, an equivalent vertical
permeability or the classical lambda coefficient of a double permeability reservoir. Each layer
can also be specified as perforated (communicating with the well) or not.
There are various multilayer tests subject to pressure transient analysis; all of them involve, in one form or another, the use of production logging tools to capture the production of the layers. These tests and their analysis approach are covered in the chapter on 'PTA – Special test operations'.
The most common cases where one can observe a change in mobility in the reservoir area are:
Compartmentalization
Facies changes
The analytical solutions which model these cases are called composite models. Their geometry
is quite straightforward and they are governed by two simple parameters.
The most common analytical composite models are the radial and the linear composite. The
radial composite geometry is centered at the well, and ri is the radius of the inner
compartment.
Fig. 7.H.1 – Radial composite reservoir Fig. 7.H.2 – Linear composite reservoir
For the linear composite reservoir (of infinite extent), the corresponding parameter will be Li, the distance from the well to the mobility change.
When one reference is chosen, the properties of the other compartment are calculated from the first using two parameters: the mobility ratio M = (k/μ)1 / (k/μ)2 and the diffusivity ratio D = [k/(φμct)]1 / [k/(φμct)]2.
We see that the ratio M/D represents the storativity (φct) ratio, which is often taken, as a first approximation, equal to 1 when both fluids are of the same nature.
Below we illustrate the pressure profile for a radial and a linear composite reservoir. At the
composite interface, there is no pressure loss but a change in the pressure gradient. The flux
on both sides is the same, but because the mobility is different Darcy’s law will give two
different pressure gradients.
At early time, the pressure will diffuse in compartment 1 only, and the behavior will be
homogeneous. When the composite limit is detected, there will be a change of apparent
mobility and diffusivity.
Fig. 7.H.3 – Radial composite pressure profile    Fig. 7.H.4 – Linear composite pressure profile
With the radial composite reservoir, the apparent mobility and diffusivity will move from the inner values (compartment 1) to the outer values (compartment 2); the final mobility will be that of compartment 2. For the linear composite reservoir, after the transition, the final apparent mobility and diffusivity will be the average of compartments 1 and 2.
The following figures show the loglog response of a radial composite reservoir with a constant distance to the interface and several values of M = D. The time at which the derivative deviates from the initial IARF (compartment 1) to the final IARF (compartment 2) is linked to ri with the same relation as for a sealing or constant pressure boundary. The ratio between the final and initial derivative level will be that of the ratio between the initial and the final mobility, i.e. equal to M. When M = D = 1, the response is obviously homogeneous.
Below are the loglog responses for the same parameters, but now in a linear composite reservoir. With the final stabilization corresponding to the average of compartments 1 and 2, the transition will be smoother, and it is easy to show that the ratio between the final and initial derivative level will be 2M/(M+1).
When M tends to infinity, i.e. the permeability of the outer compartment tends to zero, this ratio will tend to 2. This corresponds to the particular case of a sealing fault. When M tends to zero, the permeability of the outer compartment tends to infinity and the pressure will be sustained at the initial pressure at the boundary. This is the particular case of a constant pressure linear boundary.
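These level ratios, together with the definitions of M and D, are easy to verify numerically (a sketch, ours; zone 1 is the inner compartment):

```python
def composite_ratios(k1, mu1, phi1, ct1, k2, mu2, phi2, ct2):
    """Mobility ratio M, diffusivity ratio D (zone 1 / zone 2) and the
    late-to-initial derivative level ratio for radial and linear cases."""
    M = (k1 / mu1) / (k2 / mu2)
    D = (k1 / (phi1 * mu1 * ct1)) / (k2 / (phi2 * mu2 * ct2))
    radial_level_ratio = M                  # final level / initial level
    linear_level_ratio = 2 * M / (M + 1)    # tends to 2 (sealing) or 0 (constant p)
    return M, D, radial_level_ratio, linear_level_ratio

print(composite_ratios(100, 1, 0.1, 3e-6, 20, 1, 0.1, 3e-6))
```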
To illustrate the behavior of the model when mobility and diffusivity are no longer equal, we have generated the radial composite model under different scenarios. Following is a figure where the mobility ratio M is kept constant while varying the diffusivity ratio D.
Depending on the value of D, the derivative will deviate upwards (D<1) or downwards (D>1).
The response with a valley is qualitatively close to a double-porosity behavior. The difference
in shape comes from the fact that this change of storativity occurs in compartment 2 only,
while in the double-porosity model it occurs everywhere.
In the general case where M and D are different, the derivative will go from the IARF level of
the inner zone to the final level being a function of M only. There will be an upwards transition
when D<M and a downwards transition when D>M.
7.H.5 Skin
The skin can be evaluated using the classical semilog methods. The ‘inner zone’ radial flow will
give the well skin and the ‘outer zone’ radial flow will return a ‘total skin’. This ‘total skin’ is
defined by the two components:
ST = S(inner zone)/M + (1/M − 1)·ln(ri/rw)
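A numerical sketch (ours) of the total skin relation above:

```python
import math

def composite_total_skin(skin_inner: float, M: float,
                         ri_ft: float, rw_ft: float) -> float:
    """Total skin seen during the outer-zone radial flow:
    S_T = S_inner/M + (1/M - 1) * ln(ri/rw)."""
    return skin_inner / M + (1.0 / M - 1.0) * math.log(ri_ft / rw_ft)

# M < 1 (inner zone less mobile): the damaged ring around the well shows up
# as a large positive total skin, even with a zero mechanical skin.
print(composite_total_skin(skin_inner=0.0, M=0.2, ri_ft=50.0, rw_ft=0.3))
```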
When M<1 we say that the mobility increases at some point further away from the well, and often we consider the inner zone as being damaged by invasion or other causes. This gives rise to the concept of a skin with 'dimensions'. The literature contains many examples of this behavior, whether due to actual damage or some other phenomena such as changes in fluids or geological facies. What the literature is missing is actual proof that remedial action can in fact connect the better mobility reservoir directly to the well, thus bypassing the 'block' and increasing the well performance. See the section in this chapter on 'When should we use a composite model?'.
The following figure illustrates the model behavior with a changing well skin and M<1. A positive skin has no influence on the Bourdet derivative. A negative skin will affect both the derivative and the pressure change, as the early time behavior approaches that of a fracture.
In some cases these assumptions may be too simplistic; e.g. when a producing well is surrounded by multiple fluid annuli, due to particular PVT characteristics creating gas blocking or condensate banks. There may also be more than one distinct change in the facies properties. In these cases several condition changes can take place close to the well and will require a multiple radial change model.
Fig. 7.H.14 – 3 zone radial composite Fig. 7.H.15 – 4 zone radial composite
The parameters Mi and Di in the model have respectively the same definition as in the simple
model described above. The below figure illustrates the response where a four zone radial
composite model has been used.
Now the question is: are these models really useful? Jumping forward, and bearing in mind the paragraph on 'When should we use a composite model?' in this chapter, the answer is yes, but not in the hands of a novice interpreter. These models are certainly not to be used as a last resort, just to obtain a match for the sake of the match while ignoring the physical implications that come with the description of the model.
The obvious drawback that can make their use dubious is the fact that, strictly speaking, everything has to be perfectly radial, which in real life is clearly not the case. However, we have to understand that when we use such models, and any analytical model for that matter, we are looking for an equivalent behavior. These models are particularly useful to describe changes in fluid banks close to the well. Production can be close to or below the bubble point pressure in a bubble point fluid, and the gas saturation can vary radially away from the well. Thus there can be zones with both movable and immovable gas, governed by the critical saturation. This can give rise to various zones of different mobilities and even gas blocking.
In a dew point type of fluid, fluid banks of different saturations may build up around the well and again give rise to a variation of the mobility.
The physical properties may very well change, even gradually, as the distance from the well increases; thus a composite behavior is not only possible but common.
The below figures illustrate the changing mobility of a 'gas block' matched with a multiple zoned radial composite analytical model. The other figure illustrates the match of a response caused by condensate 'banking' around the well.
Following we show a typical numerical model with an unstructured Voronoi grid. The colors indicate composite zones where the mobility ratio M and the diffusivity ratio D can be defined. The loglog response is also illustrated.
The use of the radial composite model can always be criticized, as it is highly unlikely that the well has been drilled smack in the middle of a circular reservoir.
Composite models only make sense if all parameters, such as the location of the interface, the mobility change and the diffusivity change, are within acceptable ranges and can be explained. The decision to use these models should be based on knowledge of the actual conditions under which composite responses are expected, not on the fact that the data cannot be matched with other, less flexible models.
During infinite acting radial flow (IARF), the slope of the semilog plot and the level of the derivative stabilization are governed by the average (equivalent isotropic) permeability, so the permeability anisotropy itself cannot be diagnosed from this part of the response.
When the reservoir system is bounded, the time to 'see' a fault depends upon the directional permeability. In the example below, the one single fault that is simulated is by default parallel to the x direction in the reservoir (this default can be modified in the model dialog). An increase in the y direction permeability (a decrease of the ratio kx/ky) will decrease the 'time' necessary to 'see' the fault.
With respect to the KAPPA software suite, some model combinations were implemented as external DLLs that may be connected to the applications on an as-needed basis. The reasons why they were not made part of the standard model catalog are summarized below.
These are delicate models to use. They have so many leading parameters that, without paying attention to the physics, it is possible to force a match to almost anything. Delivering the solutions as external models allows client companies to control their distribution.
The solutions are more complex, and more rarely used and tested. They are therefore less stable than the simpler models that are part of the built-in capabilities of the applications.
These models are only relevant if we know beforehand, from the petrophysics, that part or all
of the layered and composite system is naturally fractured and fissured.
Such models should never be used just for the sake of matching any strange looking response.
There must be some evidence, some knowledge of the formation and fluids that would justify
such a choice. One should also not forget that there are far too many leading parameters in
the models so the concept of a solution to the ‘inverse’ problem is lost.
8 – Boundaries
OH – DV – OSF
8.A Introduction
In most well tests, and with most models used in pressure transient analysis, the first part of
the pressure response is dominated by wellbore effects and flow diffusion immediately around
the well. If the well is not fully penetrating the reservoir or not vertical, the early time
response is also affected by vertical flow from the top and bottom parts of the producing
interval. Then in most cases, but not always, the middle and/or late time pressure response
will be dominated by Infinite Acting Radial Flow (IARF), characterized by a stabilization of the
Bourdet derivative, where the average reservoir mobility (k/μ) and the global well productivity (total apparent skin) may be assessed. In many well tests, the analysis will stop there and IARF
will be the final detected behavior.
But, should the reservoir be small enough, and should the test be long enough, boundary effects will be encountered during the test. This encounter may be accidental, deliberate (as in reservoir limit testing), or inevitable in the case of long term production data.
This chapter covers the different types of boundaries, their respective pressure and derivative
behaviors and corresponding analysis methods. It also shows how apparent boundary effects may, in fact, be something else.
We will only consider boundaries that produce deviations from IARF, and we will not consider
the vertical limits of the producing interval. Physically this is a questionable option, as these
are boundaries too but for our methodology it makes sense. Upper and lower boundaries will
generally be considered in well models involving vertical flow, such as limited entry and
horizontal wells, where the early time response will involve a vertical diffusion until these limits
are reached. Paradoxically, the analyses involving top and bottom boundaries are associated with well models and developed in the corresponding chapter.
We have also excluded composite limits. Though their detection and analysis process are
similar to what is done for boundaries, they will be assimilated to reservoir heterogeneities,
and treated in the chapter dedicated to reservoir models.
Finally we will also look at the way recent developments in deconvolution allow boundaries to be assessed even when they are not detected in any individual build-up.
The figure above does not represent a 'wave' but the points at which the pressure drop reaches a given value (e.g. 1 psi) at different times (t1, t2, t3, t4). The red circles represent the
influence of the well production itself if it was producing in an infinite reservoir. The blue circles
represent the additional pressure drop due to the boundary at the same times.
This requires a physical explanation: The well production creates a pressure drop around the
well that diffuses within the reservoir. As long as the boundary influence is negligible the
diffusion will be radial and the ‘radius of investigation’ (red circles) will be proportional to the
square root of time.
When a boundary is present, there will be no pressure support beyond the boundary, and
there will be an additional pressure drop compared to an infinite configuration. This pressure
drop (blue circles) will affect the pressure profile and will also diffuse.
At some point in time the amplitude of this additional pressure drop will be picked up by the gauge in the
well, and the boundary is detected. This will only occur if the test is long enough and the
gauge is sufficiently sensitive to pick up the signal.
The pressure derivative deviates from IARF at the time when the influence of the closest
boundary becomes noticeable. The derivative then takes a shape that will depend on the type
and shape of the boundary, the flow period of interest, if it is a flow or shut-in, and in some
cases the well production history.
We will consider four types of boundaries and their typical behavior, alone or combined with
other boundaries. These are no flow, constant pressure, leaky and conductive.
∂p/∂n = 0
This equation means that the pressure profile is flat when arriving orthogonally to the
boundary. The vertical cross-section below shows the pressure profile from the well to the
boundary.
The figure below is a 3D display of the pressure profile due to a well producing near a no-flow
boundary. We are representing here a 2D problem, and the z axis represents the pressure. The
pressure at the boundary is not uniform: the pressure change is larger at the point of the boundary that is closest to the well. But along every line orthogonal to the boundary, the pressure profile becomes flat at the boundary. This is simply Darcy's law for no flow in one direction.
p = pi
The figure below shows a vertical cross-section of the pressure profile from the well to the
boundary. The slope at the boundary will correspond to the fluid flux required to keep the
pressure constant.
8.B.4 Aquifers
The constant pressure boundary above is the only pressure support model which can be easily
modeled analytically, using the method of image wells. This implies that the pressure support
is very strong and that multiphase flow effects can be neglected. This approximation works
generally quite well for gas caps. In the case of water drives such approximation may not
work, and aquifer models may be used.
Aquifers are generally modeled analytically. They require a choice of model, which will define the aquifer strength, and a relative permeability table to model the sweeping of the hydrocarbon by the water phase.
It solves for the pressure behavior at a well near a non-intersecting finite conductivity fault or fracture. The solution includes an altered zone around the fault across which it is possible to add a skin. The reservoir properties on either side of the fault can be different.
In the case of more complex well models (fractures, horizontal wells, etc.), to ensure that the model is valid, one has to add an image well of the same geometry, positioned symmetrically to the real well with respect to the fault. To enforce symmetry, wellbore storage must also be added. In some cases however it will be possible to get a good approximation of the response using a simple line source solution with no wellbore storage.
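For a vertical line-source well this image-well construction is easy to sketch (ours; oilfield units, the usual 70.6 and 948 constants, SciPy's exponential integral, no wellbore storage):

```python
import numpy as np
from scipy.special import exp1  # E1(x), the exponential integral

def dp_line_source(t_hr, r_ft, q=1000, B=1.2, mu=1.0, k=50.0, h=100.0,
                   phi=0.15, ct=3e-6):
    """Pressure drop (psi) of a line-source well in oilfield units."""
    return 70.6 * q * B * mu / (k * h) * exp1(
        948.0 * phi * mu * ct * r_ft**2 / (k * t_hr))

t = np.logspace(-2, 3, 50)          # hours
L = 500.0                           # distance to the sealing fault, ft
dp_infinite = dp_line_source(t, r_ft=0.3)
dp_fault = dp_infinite + dp_line_source(t, r_ft=2 * L)   # image well at 2L
# At late time the semilog slope (and derivative) of dp_fault is twice
# that of dp_infinite, as described in the next section.
```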
8.C.1 Behavior
Before the additional pressure drop due to the boundary is picked up by the pressure gauge, the system behaves as if the reservoir were of infinite extent in all directions.
When the boundary is detected, the response deviates from infinite acting radial flow and the derivative level eventually doubles. If one considers the physical reservoir, the well has received the information that the reservoir is actually only half as large as in the infinite case: after the sealing fault is detected, only half of the originally assumed reservoir is available, and therefore the speed of the pressure drop doubles. If one considers the equivalent image well, after the detection there are two wells producing instead of one, and for the same reason we have a doubling of the speed of the pressure drop.
Boundary distance:
L = 0.01217·√(k·tint / (φ·μ·ct))
As in the infinite case, the permeability and skin will be given using the slope of the IARF line.
The time of intercept between the IARF line and the half radial flow line will give the boundary
distance using the equation above.
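A sketch (ours) of this distance calculation in oilfield units (tint in hours, L in feet):

```python
import math

def fault_distance_ft(k_md, t_int_hr, phi, mu_cp, ct_per_psi):
    """L = 0.01217 * sqrt(k * t_int / (phi * mu * ct))."""
    return 0.01217 * math.sqrt(k_md * t_int_hr / (phi * mu_cp * ct_per_psi))

# Intersection of the IARF and double-slope semilog lines at 30 hours:
print(fault_distance_ft(k_md=50, t_int_hr=30, phi=0.15, mu_cp=1.0,
                        ct_per_psi=3e-6))   # ~700 ft
```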
The behavior is the same as for drawdowns, with an initial IARF line, a second line with twice
the slope for half radial flow, and a boundary distance related to the time of intercept of these
two straight lines. The only difference is that, in the case of shut-ins the intercept of the half
radial flow line (and NOT the IARF line) will be the one used to calculate p*.
The easiest technique to match such a response is to first focus on the early and middle part of the data, matching this part with an infinite model. This may include an initial nonlinear regression on all the points before the deviation from IARF. In the case of a vertical well in a homogeneous reservoir this will fix C, kh and Skin.
In a second stage the boundary effect is added, and the first estimation of the distance will be
obtained from the semilog analysis or, preferably, an interactive Saphir / Topaze feature where
the time of deviation is picked by the user. After initial simulation the parameters, especially
the boundary distance, will be improved from nonlinear regression, this time on all data points.
In case of poor data quality, if the nonlinear regression fails it is possible to correct the
boundary manually using the following rule of thumb: If we multiply the boundary distance by
2 we multiply the boundary time by 4. For small corrections, adding X% to the boundary
distance will add 2*X% to the boundary time.
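Since the deviation time scales with the square of the distance, the manual correction can be written as (our sketch):

```python
import math

def corrected_distance(L_current_ft, t_observed_hr, t_simulated_hr):
    """Rescale the boundary distance by the square root of the
    observed/simulated deviation-time ratio (since t scales with L^2)."""
    return L_current_ft * math.sqrt(t_observed_hr / t_simulated_hr)

# Model deviates at 10 hr but the data deviates at 40 hr: double the distance.
print(corrected_distance(500.0, 40.0, 10.0))   # 1000.0
```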
If the boundary is very close to the well (e.g. 100 ft in the example below), IARF may not
develop before the fault is detected. For very nearby boundaries the pressure response could
look like a homogeneous infinite response with an apparent kh equal to half the true kh.
The single fault model will be used when the slope is less than double because it is just the
simplest boundary model that allows the association between a time and a distance. If we see
a deviation from IARF, and IF we think this deviation is due to a boundary, the sealing fault
model will provide a good estimate of the distance from the well to such boundary. From this
distance we will decide if the hypothesis of a boundary is reasonable.
The well is located between two intersecting linear boundaries of infinite extent. θ is the angle between the faults, and L1 and L2 are the orthogonal distances between the well and the two faults. A particular case is when the well is located on the bisector of the faults (L1 = L2).
8.D.2 Behavior
If the well is significantly closer to one of the boundaries (point A), the initial behavior is the
same as for a single sealing fault. When the second fault is detected, the response enters its
‘final’ behavior. If the well is fairly equidistant from the two faults, the response goes straight
from IARF into the ‘final’ behavior.
The 'final' behavior is semi-radial flow restricted to the sector delimited by the two faults. If θ is the angle between the two faults, the actual reservoir size is smaller than an infinite reservoir by a factor of 2π/θ; hence the pressure drop is 2π/θ times larger to produce the same fluid to the well.
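The late time multiplier 2π/θ is easily evaluated (a small sketch, ours):

```python
import math

def wedge_multiplier(theta_deg: float) -> float:
    """Ratio of the final derivative level (and semilog slope) to the
    IARF level for two intersecting sealing faults of angle theta."""
    return 2.0 * math.pi / math.radians(theta_deg)

print(wedge_multiplier(90.0))   # 4.0 : a quadrant produces 4x the IARF level
print(wedge_multiplier(180.0))  # 2.0 : recovers the single sealing fault
```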
We use the term ‘final’ in parentheses as, clearly, there would be a time when further
boundaries would be detected if the test was continued.
In most cases the second straight line, caused by the closest limit, is very brief and is masked by the effect of the second limit.
As in the infinite case, the permeability and skin will be given using the slope of the IARF line.
The time of intercept between the IARF line and the half radial flow line will give the boundary
distance using the equation above.
The behavior is the same as for drawdowns: an initial IARF line, then possibly a brief second line with twice the slope due to the first limit, and then, when the effect of the second limit appears, a third straight line called 'final radial flow' whose slope is 2π/θ times the slope of the original IARF line. The only difference is that, in the case of shut-ins, the intercept of the 'final radial flow' line (and NOT the IARF line) will be the one used to calculate p*.
When intersecting faults are generated using image wells, the interpreter has to choose the angle within a discrete range of values only, corresponding to integer fractions of π or 2π.
Image wells are very easy to implement and very fast to calculate on a computer or a hand calculator. But geometrically they require the angle to be an integer fraction of 2π if the well is situated on the bisector (well B), and an integer fraction of π if the well is located anywhere else (well A). The major limitation is that we will generally not match exactly the final stabilization level, and θ will not be accessible as a nonlinear regression parameter.
Conversely, methods using integral solutions will simulate the response for any angle at a higher, but now acceptable, CPU cost. These models will even allow θ to be higher than π, up to 2π. We then end up with a half fault that may be described later as an incomplete boundary. Integral solutions also allow regression on θ.
8.E.2 Behavior
If the well is significantly closer to one of the boundaries (well A), the initial behavior will be the same as a single sealing fault response. When the second fault is detected, the response will enter its 'final' behavior. If the well is fairly equidistant from the two faults (well B), the response will go straight from IARF into the final behavior.
The final behavior is linear flow along the channel. Again we should refer to the term ‘final’ in
parentheses as, obviously, there will come a time when the last outer boundaries will be
detected if we wait long enough.
Linear flow, or more generally the flow of fluid through a constant cross-section, is encountered in several well test models, such as fractures at early time after storage, horizontal wells at intermediate times after the top and lower boundaries have been reached (see the 'Well models' chapter), and parallel faults at late time. In each case, the flow is characterized by linearity between the pressure change and the square root of the elapsed time:
Δp = A·√Δt + B
Δp' = Δt·(∂Δp/∂Δt) = (A/2)·√Δt
Fig. 8.E.3 – MDH plot for parallel faults – point B    Fig. 8.E.4 – Horner plot for parallel faults – point B
In the case of a build-up, the adequate plot would be the Horner plot or a superposition plot, but in no case could it be used to extrapolate to a p*, since the final pressure behavior does not reveal any straight line on a semilog plot.
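The half slope, and the factor of two between pressure and derivative, follow directly from these relations (a sketch, ours):

```python
import numpy as np

A, B = 25.0, 10.0                  # psi/sqrt(hr), psi
t = np.logspace(-1, 2, 4)          # hours
dp = A * np.sqrt(t) + B            # pressure change during linear flow
dp_prime = 0.5 * A * np.sqrt(t)    # Bourdet derivative = t * d(dp)/dt

# On a loglog plot both follow a half slope, the derivative a factor 2 lower
# (the constant B only shifts the pressure curve at early time).
slope = np.diff(np.log10(dp_prime)) / np.diff(np.log10(t))
print(slope)                       # ~[0.5, 0.5, 0.5]
```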
The behavior above is only strictly valid for drawdowns. In the case of a build-up or a complex production, in theory nothing guarantees that a specific behavior will occur on the loglog plot. In reality, the loglog plot is also usable for shut-ins.
Strictly speaking, only the derivative of the production period will follow a half slope. For any
practical purpose, especially considering the noise of real data, all flow periods will show a
derivative with a slope oscillating around 0.5. The pressure response itself will bend down
during build-ups and will not show any specific behavior.
Practically, the interpretation engineer will require the program to generate the pressure
response from the usual relation between mobility, time of departure from IARF and boundary
distance. The model will be generated taking into account the well production. Any deviation
from the strict half slope will also be simulated by the model. Then a nonlinear regression on
the boundary distances will correct any initial error in the estimate of these boundary distances
due to this superposition effect.
For shut-ins and complex production histories, fracture, horizontal well and parallel faults will
diverge in the choice of the superposition function. As the linear flow is the ‘final’ behavior of
the channel response, if the considered shut-in has reached linear flow, all component
functions used in the superposition will be in linear flow, and the time scale to use is the
multirate superposition of the square-root.
For a simple build-up following a unique production, the tandem square root function [√(tp+Δt) − √(Δt)] will be used. For more complex productions, the superposition of the square root function will apply. For a shut-in, the extrapolated pressure corresponding to infinite shut-in time will be the right estimate of p*.
The early time transient will depend on the distance between the well and the now three
sealing faults. This will not be discussed in detail here, and we will only consider the case of a well equidistant from the three boundaries. The solution is compared with the parallel fault
solution, i.e. in the absence of the third boundary.
The two responses are similar and the U-shape reservoir produces a late time half slope.
Compared to the parallel fault solution, the final behavior is shifted upward by a factor of two
on the loglog plot. This shift is coherent with the fact that the U-shape reservoir is half the size
of the corresponding complete channel.
Specialized analysis will also provide an estimate of k.L². The software will need to know that it is a U-shape reservoir, in order to apply the factor of two correction in the straight line calculation.
Even if the reality is not strictly two linear parallel faults but irregular limits that are close to parallel, this model gives a very satisfactory approximation before moving to more sophisticated models.
Conversely, a closed system will be modeled when the test is long enough, or the reservoir is
small enough, to detect the whole extent of the reservoir. This will be characterized by at least
one of the following behaviors: (1) during the production, we will see linear depletion, (2) at
shut-in the pressure will stabilize at a pressure lower than the initial reservoir pressure.
8.F.1 Description
The most common and easiest way to model a closed system is the circular model. It assumes that the tested well is located at the center of a reservoir of circular shape.
This model is unlikely to reflect the exact reservoir geometry and the well location. However it is useful and fast when depletion is detected but the geometry is unknown and the response does not exhibit any intermediate boundary behavior.
The second most popular closed system model is the rectangular reservoir. Using the principle
of image wells, this solution allows us to define an aspect ratio between the reservoir and the
position of the well at any point in the rectangle. From the well point of view, it means that the
four boundaries may be positioned at any distance. When all four distances are equal, the well
is at the center of a square, which is practically identical in response to a circular solution with
the same area.
Another advantage of this model is that it can easily be extended to multiwell solutions where
contributions of interfering wells can be added. This possibility is critical to properly address
material balance issues.
Numerous other models of closed systems have been published and used. Complex shapes can
be modeled using boundary elements or numerical models, and can also be extended to
multiple wells.
8.F.2 Behavior
In the illustrations below, we will show the response of a circular reservoir and a rectangle.
The shape of the rectangle and the position of the well are illustrated in the figure above. Both the rectangle and the circle will have the same area.
Unlike the open systems described in the earlier sections, the behavior of closed systems is
radically different between production, or injection, and shut-in periods.
During production, or injection, the pressure information will diffuse and reach boundaries
sequentially. When the last boundary is reached, the pressure profile will stabilize and then
drop uniformly. This particular phase of flow is called Pseudo-Steady State (see chapter
‘Theory - Outer boundaries’). It is characterized by linearity between the pressure drop and the
time.
$$\Delta p = A\,\Delta t + B \qquad\qquad \Delta p' = \Delta t\,\frac{\partial \Delta p}{\partial \Delta t} = A\,\Delta t$$
For the circle, the response will go straight from IARF to PSS. For the rectangle, closer
boundaries will be detected beforehand.
When the well is shut-in, there will be a transfer of fluid back to the well zone, until the
pressure stabilizes back to the average reservoir pressure.
The figures below illustrate this stabilization; the model is a closed circle.
Fig. 8.F.3 – Production semilog plot, circular reservoir
Fig. 8.F.4 – Build-up semilog plot, circular reservoir
For the shut-in, the pressure will stabilize at the average reservoir pressure and the derivative will ‘dive’ towards zero. It would not be correct to imagine that, because a system is closed, the shut-in derivative will only behave this way and simply ‘dive’. For the rectangular model, or any model with closer boundaries, intermediate boundary effects will be detected, with the derivative going up before the final dive.
Today the contribution of the shape factor to the pressure drop at PSS is implicitly calculated
by the model.
The slope of a linear plot of the flowing pressure versus time is:
$$m = 3.36 \times 10^{-6}\,\frac{q\,B}{\phi\,A\,h\,c_t}$$
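As a rough, hedged illustration of how this slope is used, the sketch below estimates the connected pore volume from the Cartesian PSS slope with the standard field-unit material balance Vp = 0.23395·q·B/(ct·|m|) (q in STB/D, B in rb/STB, ct in psi⁻¹, m in psi/hr, Vp in ft³); note that the numerical constant printed above may correspond to a different unit set, and the function names are illustrative only.

```python
def pore_volume_from_pss_slope(q_stb_d, B_rb_stb, ct_per_psi, slope_psi_per_hr):
    """Connected pore volume from the PSS straight-line slope (field units).

    Material balance: |m| = 0.23395 * q * B / (ct * Vp), so Vp = 0.23395 * q * B / (ct * |m|).
    Returns the pore volume in cubic feet.
    """
    return 0.23395 * q_stb_d * B_rb_stb / (ct_per_psi * abs(slope_psi_per_hr))

def drainage_area_acres(pore_volume_ft3, h_ft, porosity):
    """Drainage area (acres) for a given net thickness and porosity (1 acre = 43,560 ft2)."""
    return pore_volume_ft3 / (porosity * h_ft * 43560.0)

# Example: q = 1000 STB/D, B = 1.3 rb/STB, ct = 1e-5 /psi, slope = 0.11 psi/hr
vp = pore_volume_from_pss_slope(1000.0, 1.3, 1.0e-5, 0.11)
print(vp, drainage_area_acres(vp, h_ft=50.0, porosity=0.20))
```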
It is perfectly correct, over the production period, to work as if the drainage area was the limit
of a small reservoir with one well only (left side figure below). This is what average pressure
calculation, AOF and IPR curves are designed for. But the limit of a drainage area is in no way
a hydraulic boundary as those described in this chapter and the analogy ceases as soon as we
start dealing with a build-up. When the well is shut-in (central figure below), the drainage area
will very quickly shrink, taken over by nearby wells. Ultimately another set of drainage areas
will stabilize (right side figure). So a closed system model can NOT be used during a build-up.
On the build-up we observe the shut-in effect superposed on the depletion effect due to the other wells' production.
Actually, what is surprising is that most other models do show equivalent derivative behaviors
during productions and shut-ins. We have already seen that it was not strictly true for parallel
faults, although close enough. The derivative is calculated with respect to superposition time, which itself was designed to handle one, and only one, behavior: IARF. Calculations were done
for the superposition derivative to stabilize when all components of the production history
reach IARF. The wonderful surprise is that, indeed, the derivative carries the signature of
numerous well, reservoir and boundary models, and in most cases these signatures survive the
superposition process.
However there is no surprise with single faults and intersecting faults, because the long term
behavior is IARF multiplied by a fixed factor. It just happens that, numerically, when the
models are of infinite extent, i.e. the pressure returns to the initial pressure at infinite shut-in
time, the derivative behavior is only marginally affected by the superposition.
But this is not the case for closed systems. When PSS is reached, we are no longer in the usual
diffusion process. The reservoir behaves like a depleting tank, and all that survives from the
diffusion is a steady pressure profile. When the well is shut-in, the diffusion process will regain
its territory until, again, all boundaries are reached. Because the reservoir acts like a tank, as
there is no longer any production, the pressure stays constant and uniform and at average
reservoir level. This is why the behaviors are so different. On a large time scale, we have a
straight material balance behavior, only perturbed by transient diffusion effects whenever
rates are changed.
Fig. 8.F.11 – Match with the closed system, production field data
On the shut-in period data the derivative can also show the various boundary effects but will finally drop down, characterizing a stabilized pressure.
Fig. 8.F.12 – Match with the closed system, shut-in field data
Fig. 8.G.1 – Flow profile in a rectangle with three sides sealing and one at constant pressure
In red the original pressure profile, in black the pressure profile after production
The combinations are potentially unlimited, but we will illustrate constant pressure behavior with three examples: the two simplest, pure, constant pressure boundaries, namely the linear boundary, i.e. the constant pressure equivalent of the sealing fault, and the circular boundary, i.e. the constant pressure equivalent of the closed circle, plus a rectangular model where one side is at constant pressure. We will use the geometry described below, and consider the west boundary to be at constant pressure.
8.G.2 Behavior
As soon as the constant pressure boundary is reached, it will provide the volume of fluid flowing into the reservoir needed to keep the boundary pressure at its original value. The pressure will stabilize and the derivative will dip. The speed of this dip will depend on the boundary geometry.
The figures below illustrate this stabilization; the model is a constant pressure circle.
Fig. 8.G.3 – MDH for a production period
Fig. 8.G.4 – Horner for a shut-in period
There may also be a slight difference in the shape of the response, due to the fact that the
superposition time is used to calculate the derivative. In the case of constant pressure
boundaries, it is preferable to use the drawdown derivative, and not the superposition
derivative, to get a derivative response matching exactly the shape of the drawdown. But the
difference is, in any case, small. And, if the model derivative is calculated the same way as for
the data, either option in the derivative calculation will provide reliable and equivalent results
after nonlinear regression.
In the figure below we see the response of the rectangular model during drawdown, with and
without a constant pressure west boundary. Both responses match until this last boundary is
seen. At this point, the closed system will reach PSS displaying unit slope of the derivative,
while the derivative of the rectangle with a constant pressure boundary will go down and
pressure will stabilize.
Now, the same rectangular responses are compared for the shut-in. The closed system
derivative response inverts its behavior and dives quickly, while the mixed boundary response shows a very similar behavior. Again, this may not be strictly true if the production time is very
short.
The figure below shows a field example where the plunging derivative is interpreted as a constant pressure boundary. The same example can also be matched with a closed system model, as illustrated further below.
The loglog plot shows a comparison of this set of data, with two rectangular solutions, one with
only sealing faults, one with combinations of sealing and constant pressure boundaries. Again,
the quality of the match is excellent, and the slight differences alone would not permit us to
discriminate between the choice of models.
There is no answer if all we have is the build-up response. But generally, the uncertainty will
be lifted by looking at the rest of the well test, and in particular the starting pressure. When a
constant pressure boundary is involved, the final shut-in pressure will invariably revert to the
initial pressure.
For a closed system, the final shut-in pressure will be the average reservoir pressure, derived
from the initial pressure with a straight material balance calculation. So the starting point of
the simulation will be very different from one model to the other.
The history plot below shows the comparison between the two options on the whole production
history, showing that the constant pressure system model is absolutely not a realistic option,
even if it perfectly matches the build-up response. Unfortunately, the choice is not always that
simple.
Below, a leaky fault: the leakage allows some flow to pass through the fault.
Here below, an incomplete fault: the flow will pass around it.
In both cases, it is an obstacle that still allows the part of the reservoir beyond the boundary
to contribute to the flow. The models will be characterized by a deviation from IARF, then a
progressive return of the derivative to the same level.
The boundary is defined by a leakage coefficient α, also called the transmissibility ratio. α = 0 corresponds to a sealing fault, α = 1 corresponds to an infinite reservoir, i.e. no boundary.
The amplitude of the drop, and therefore the distance between the two straight lines, depends on the α value. A very small value of α makes the drop last longer and increases the distance between the lines.
The behavior that we can observe on a Horner or a superposition plot for a build-up is similar and, as mentioned before for other boundary effects, the first line is used for the IARF calculation and it is the second one that is extrapolated to get p*.
This does not mean that, once the derivative has returned to IARF, the effect of the fault is over. In fact it is not true IARF, and there is a cumulative effect on the pressure response, corresponding
to the stabilized pressure drop through the fault as illustrated below on the isobar map
generated with a numerical model:
Fig. 8.H.7 – Isobar map showing the pressure drop at the fault
Each color line corresponds to a certain pressure value but it can also be interpreted as an
investigation or influence zone at a certain time.
The blue represents the early time influence zone, the green to yellow the intermediate time
and the red the late time.
We see on the map that the radial flow goes on the other side of the fault but with a certain
delay. That corresponds to an additional pressure drop at the well compared to the infinite
system.
The presence of the fault temporarily influences the pressure behavior: it creates a deviation from IARF but, once the pressure signal has ‘turned around’ the fault, the complete reservoir contributes to the flow and the behavior returns to IARF. What remains is a partial restriction to the drainage, creating an additional pressure loss.
The time at which the fault effect is observed depends on the distance to the fault. Its amplitude depends on the fault position: the more directly the fault faces the well, the higher the amplitude (case A).
The build-up behavior is identical. The first line is used for the IARF calculation and the second one to extrapolate to p*.
Below, the isobar maps plotted for a certain production time illustrate very well the influence of the position of the incomplete fault on the pressure behavior.
For the fault B case, the early and intermediate times are influenced but the late time isobars get very close to the classic circle of Infinite Acting Radial Flow.
In case C, the early and intermediate times are in infinite acting radial flow and the fault hardly influences the late time isobar lines.
This model solves for the pressure behavior at a well near a non-intersecting finite conductivity fault or fracture. The solution includes an altered zone around the fault with the possibility to add a skin across it. The reservoir properties on either side of the fault can be different.
The model parameters are the kh and skin on the well side, the distance to the fault or fracture, the conductivity and skin of the fault, and the mobility ratio M and diffusivity ratio D, defined as the ratio of the well side parameters divided by the parameters of the reservoir on the other side of the fault or fracture:
$$\text{Mobility ratio:}\qquad M = \frac{(k/\mu)_{\text{well side}}}{(k/\mu)_{\text{other side}}}$$
8.I.1 Behavior
After wellbore storage, the next flow regime would be, typically, IARF reflecting the
permeability and skin of the reservoir around the well. This would be followed by the influence
of the fault seen as an increase in mobility, then by a linear flow corresponding to the drainage
of the reservoir by a linear source. When the investigated reservoir area is large enough and the fracture flow is no longer dominating, a final IARF can be observed.
An isobar map, generated after a certain time of production, can help visualize the type of behavior due to a conductive fault.
Each color line corresponds to a certain pressure value but it can also be interpreted as an
investigation or influence zone at a certain time.
The blue represents the early time influence zone, the green to yellow the intermediate time
and the red the late time.
We can observe in blue and green the early IARF before any fault effect.
At early time, on the well side, the radial flow is influenced by the uniform pressure imposed by the conductive fault; this creates the derivative dip.
On the other side of the fault, the parallel isobar lines characterize a linear flow which dominates at late time, until a larger zone is involved and a final radial flow is established.
This limitation makes the semilog plot of little interest in this case.
Finally, a homogeneous global behavior should be observed, with an IARF stabilization at a level governed by the mobility ratio M.
The figure below illustrates the effect of the distance to a low conductivity fault or fracture
(100 mDft). It can be seen that the transition through the fault ends with a ¼ slope (bilinear
flow) before reaching the second IARF stabilization (M and D = 1). When the fault is far enough, an IARF can be observed on the well side before the fault effect.
The figure below illustrates the effect of the distance to a highly conductive fault (100,000 mDft).
In this case the transition through the fault is much deeper and ends with a ½ slope (linear
flow) before reaching the second IARF stabilization (M and D= 1).
The figure below illustrates the effect of a change in the fault skin. The damage at the fault
creates a hump on the derivative and the transition through the fault happens later as the skin
increases. The distance to the fault is 1000 ft and the fault conductivity is 1.e5 mDft,
M and D=1.
The figure below shows the influence of a change in mobility. The conductive fault has a
permeability thickness product of 1000 mDft, the distance to the fault is 1000 ft, the fault skin
is zero and the permeability thickness of the well side of the reservoir is 1000 mDft.
What is important to note here is that when the mobility increases, the tell-tale bilinear and linear flows are largely masked.
The mobility ratio changes the level of the second IARF stabilization: final stabilization = first stabilization × 2/(1 + 1/M).
Note that when M tends to infinity (ie with no ‘other side of the fault’ conductivity) the second
stabilization is twice that of the first IARF level; equivalent to a single sealing fault. Equally,
when M tends to zero, the response is equivalent to a constant pressure fault. This is in line
with the response of the linear composite reservoir model. However, for intermediate increases
and decreases in mobility the conductive fault model retains the diagnostic sign of the
presence of the fault or the fracture rather than the linear interface assumption used in the
linear composite reservoir model. The figure below illustrates this.
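A minimal sketch of the stabilization rule quoted above (the function name is illustrative); it also reproduces the limiting behaviors just discussed: M tending to infinity gives twice the first level (sealing fault), M tending to zero gives a vanishing stabilization (constant pressure fault).

```python
def second_iarf_stabilization(first_stab, M):
    """Level of the final IARF stabilization for a conductive fault.

    first_stab : level of the first (well side) IARF stabilization.
    M          : mobility ratio, well side / other side.
    """
    return first_stab * 2.0 / (1.0 + 1.0 / M)

print(second_iarf_stabilization(10.0, 1e9))   # M -> infinity: ~20, equivalent to a sealing fault
print(second_iarf_stabilization(10.0, 1e-9))  # M -> 0: ~0, equivalent to a constant pressure fault
print(second_iarf_stabilization(10.0, 1.0))   # M = 1: 10, no mobility contrast
```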
In an ideal world, one behavior will occur after the other, and logic dictates that we see first
wellbore effects, then the well geometry, then IARF and/or reservoir heterogeneities, and then
finally boundary effects. The figure below shows such a simulated combination. In this case,
discriminating all behaviors would require six logarithmic cycles, provided that the parameters
are such that one behavior starts exactly where the previous ends. In the case shown below,
with IARF occurring after one hour, this means a test duration of 10,000 hours. So in real life
this will not happen, and these different behaviors will often occur at the same time.
Sometimes, and to make matters worse, they will not occur in their logical order, and a
reservoir heterogeneity may, for instance, occur after the first boundary is seen.
The name of the game is common sense. In general, these behaviors will aggregate, or one
will override the other. When the engineer has a good knowledge of how individual features
will show up on the derivative, he/she will easily figure out, qualitatively at least, what the
resulting combination may look like. But there may be ‘numerical’ surprises, and only the
generation of the model will confirm this guess.
The above is a general comment. What is specific to boundaries is that they occur at late time
and are numerically even more sensitive to errors and residual behaviors.
In the previous sections we have described how the different boundary models will behave.
Assuming that the boundary is the last behavior in the derivative response, the principle will be
first to match the early and middle time behavior with the right wellbore / well / reservoir
model, or alternatively in the situation where the data appears to be of poor quality, figure out
why we cannot match the early time and at least get a fair estimate of the mobility, i.e. the
pressure match, in order to gain a realistic relationship between the boundary distance and the
time at which it is detected.
If this is done, the initial estimate of the boundary parameters will be relatively easy. The
mobility and the identification of the time of deviation from IARF will give the distance(s), and
various specialized analyses or interactive pick options will allow us to assess the other
parameters. For example the angle between intersecting faults will be obtained by indicating
the level of the second derivative stabilization by the click of the mouse.
After the initial model generation, and assuming that we think this is the right model, the
principle will be to adjust the boundary parameters. This will be done manually and/or by
nonlinear regression.
Manual improvements will be very effective to correct the initial gross mistakes, as the rules of thumb are pretty straightforward. For the distances, the basic rule is that for a given mobility the governing group is t/r². In other words, if we see that the model is showing the boundary behavior at the ‘wrong’ time, multiplying the boundary distance by two will multiply the time of occurrence by four. More realistically, for small multipliers (1+ε) where ε is small, if we need a time of occurrence ε% longer we will need to set the boundary distance (ε/2)% further away. There are also software related ‘tricks’ that correct the boundary distance by sliding the match, but these will not be detailed here.
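A minimal sketch of this t/r² rule of thumb (names and example values are illustrative): if the model shows the boundary effect at time t_model while the data deviates at t_data, the boundary distance is simply rescaled by √(t_data/t_model).

```python
import math

def corrected_boundary_distance(L_model_ft, t_model_hr, t_data_hr):
    """Rescale a boundary distance so that the model deviates from IARF at the observed time.

    For a given mobility the governing group is t/r^2, hence L_new = L_old * sqrt(t_data / t_model).
    """
    return L_model_ft * math.sqrt(t_data_hr / t_model_hr)

# Example: the model boundary shows up at 20 hr while the data deviates at 80 hr
print(corrected_boundary_distance(500.0, 20.0, 80.0))   # 1000 ft, i.e. the distance is doubled
```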
Using nonlinear regression to improve the boundary parameters will require a little more
caution. Because the boundary effects are a late time behavior identified on a relatively small
pressure change, the match will be very sensitive if the early time of the response has not
been correctly matched. If one runs an optimization on all parameters; kh, skin and boundary
distance, starting from an initial model where the infinite acting part is not matched, typically,
the boundary distance will go bananas. Advanced software features, such as genetic
algorithms, are improving the process, but these are not really needed if the engineer knows
what he/she is doing.
The recommended procedure is to get a match first on the early part of the response. This will
be done, for example, using an infinite model and optimizing all parameters on the part of the
response that is not influenced by the boundary effect. Then, the boundary will be added, and
generally the remaining work will be to regress on the boundary parameters, keeping the other
parameters, especially the mobility, constant. Sometimes, adding the boundary will have a
small effect on the match, and it might be necessary to allow the mobility to vary, or make a
small correction to the mobility before running the regression.
The above is only valid if a good early time match is possible. Sometimes it will not be,
because early time data is messy, or the combination of wellbore / well / reservoir / boundary
models is just not available. It will still be possible to use optimization on the boundary
distance if the following rule is followed. ‘However nasty your early time match, the important
thing is that the simulated pressure, at the time where the derivative deviates from IARF, is
spot on’. This can be achieved by making a first optimization on skin only, the target being this
particular point in time from which IARF will deviate. Then, if we keep all parameters not
related to boundary, including the skin, constant, and if optimization is only performed on
points after IARF, this will work.
Last but not least, there is information on boundary effects that may be obtained beyond the
signature of the build-up derivative. This was already shown in section H, where the simulation
match could discriminate between a no-flow and a constant pressure boundary. This is what
we call the ‘missing boundary’. If we consider a closed system, we may only assess one or two
distances from the build-up response. The principle then is to add the missing boundaries at a
distance large enough to be unnoticed in the build-up, and then run a nonlinear regression on
the total production history, keeping all parameters that ensure the quality of the build-up
match constant, in order to improve the objective function. This may not always work, but if
there has been depletion during the production, matching the initial pressure will at least
provide a rough estimate of the reservoir size, even if individual boundaries are not calculated.
$$r_{inv} = 0.029\,\sqrt{\frac{k\,t}{\phi\,\mu\,c_t}}$$
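For reference, a minimal sketch of the field-unit form of this formula (k in mD, t in hours, φ as a fraction, μ in cp, ct in psi⁻¹, r in ft); as stressed below, this number should not be turned into a reserve estimate.

```python
import math

def radius_of_investigation_ft(k_md, t_hr, phi, mu_cp, ct_per_psi):
    """Classical radius of investigation, r = 0.029 * sqrt(k*t / (phi*mu*ct)), field units."""
    return 0.029 * math.sqrt(k_md * t_hr / (phi * mu_cp * ct_per_psi))

# Example: k = 50 mD, t = 72 hr, phi = 0.2, mu = 1 cp, ct = 1e-5 /psi
print(radius_of_investigation_ft(50.0, 72.0, 0.2, 1.0, 1.0e-5))   # roughly 1230 ft
```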
This definition is harmless as long as it is not used. But the trouble starts when somebody else
with no well test background takes this number, calculates the corresponding pore volume,
assigns a recovery factor and uses this number to assess minimum proven reserves. This is, of
course, completely erroneous, and for several reasons:
1. This definition assumes that the flow is radial homogeneous. This definition will lose its
meaning when we are confronted by complex configurations that might include fractures,
horizontal wells, lateral boundaries, composite zones and heterogeneous formations. So we
should speak about area of investigation, or better still, the volume of investigation.
2. This definition does not take into account the gauge resolution and the overall quality and
noise of the pressure response, which may or may not allow us to detect the boundary
effect.
3. What time are we talking about? In the case of a build-up, is it the duration of the shut-in,
or the duration of the test? After all, as shown in the previous section, it is possible to get
some information from the history match.
It is indeed possible to obtain a decent estimation of minimum reserves, but certainly not
using the radius of investigation. This can be achieved using boundary models as follows:
1. If you have diagnosed a closed system and managed to identify all limits, the problem is
over, and the reserve estimates will correspond to the pore volume of your reservoir times
a recovery factor.
2. If you have not diagnosed a closed system, and even if you have only used an infinite
model, the principle is to find the minimum size of the reservoir that is consistent with your
well test response. If you have diagnosed an infinite reservoir you will take the same model
and start with an arbitrarily large circular or square reservoir. If you have identified some,
but not all, boundaries you will complement and close the system with additional,
arbitrarily far boundaries. Then, in both cases, you shrink the reservoir model as much as
you can, to get to a point where, if the reservoir was smaller, you would have detected and
diagnosed the additional boundaries during the test. This is of course a much more
subjective approach than a simple equation, but, from an engineering point of view, it is
the only one that makes sense.
There is even an extreme behavior that may be seen under exceptional conditions. In the case
of a short production time, as mentioned in a previous paragraph, the reflection of the production information off the boundary may not have come back to the well before the well was shut in. So this information will reach the well after the shut-in, followed by the reflection
of the shut-in itself. When the reflection of the production reaches the well, the derivative will
go down, then back up when the shut-in information bounces back. The derivative will then
look as below. It requires an exceptionally good signal and gauge, but it is seen from time to
time. The presence of a slight valley in the derivative should not be, of course, systematically
interpreted as such an effect, as the amplitude of the pressure signal is very small. But, in the
case of a short production time, if the model also exhibits such transition behavior, this will be
the explanation, and actually a sign of very good data quality.
It follows that the multiple boundary behaviour observed on a build-up can be quite different from the expected behaviour that was described for a production period.
This is one reason why the ‘typical derivative behaviour’ catalogs are of limited interest: they represent drawdown behaviour, excluding any superposition effects, whereas we usually interpret build-up data.
The figures below show not only how the build-up response can differ from the drawdown response but also how the same short production duration can have a different influence on different intersecting fault geometries.
Fig. 8.M.2 – BU response after a short production on intersecting faults
Fig. 8.M.3 – BU response after the same short production on a different intersecting faults geometry
The same is observed in the case of parallel faults: even the position of the half unit slope differs on the build-up, which limits the validity of the quantitative results from the conventional ‘1/2 unit slope straight line’ drawn on the derivative.
Fig. 8.M.4 – BU response after a short production on parallel faults
Fig. 8.M.5 – BU response after the same short production on different parallel faults
When dealing with a closed system, the superposition and build-up effects cumulate and become even more critical.
In the figure below we compare the drawdown derivative shape with two build ups after a long
and a short production period.
It was already known that the drawdown and build-up behaviours are systematically different in the case of Pseudo Steady State, but the figure also shows clearly that the two build-up derivatives are different from each other and could lead to misinterpretation.
The long production duration build-up derivative (in green) dips at the PSS.
The short production duration build-up derivative (in blue) also dips but, in addition, is subject to a deviation at intermediate time.
The diagnosis of multiple boundaries on build-up data is not as straightforward as it appears if we only rely on the typical drawdown derivative behavior. The shape is highly influenced by the previous production history and only a model derivative taking into account the various superposition effects should be used for a final match with the data.
8.M.3 Conclusion
The only pure, conventional boundary effects can be observed on the production period data, assuming a constant production rate; but production period data are usually noisy due to rate instabilities.
The build-up data, supposedly more stable, are therefore classically used for analysis purposes, but their sensitivity to the superposition effect makes the diagnosis more difficult: it requires more experience and imposes the use of software based on the model matching method.
A simpler answer could be found if the build-up data, victims of the superposition, could be ‘de-superposed’ in order to appear as a drawdown response to a constant rate. This is called deconvolution and it will be developed in the next chapter.
In the figure below, on the left, a partial production history (the last 60 hours of well test data) was used to generate the loglog plot of a field data set. The derivative exhibits a clear ‘boundary effect’: it is actually the result of a false superposition calculation with too short a production time.
The right side plot shows the result when using the complete production history (adding 440 hours of previous production): no boundary effect can be too ‘easily’ diagnosed any more.
Fig. 8.N.1 – Loglog field data response with IARF using incomplete production duration
Fig. 8.N.2 – Loglog field data response with correct production history
Using a false, too long production history (1000 hours more) can also have an effect on the derivative shape. The resulting curve, below, does not really show a boundary effect, but the
apparent mobility will be over-estimated. In other cases the response will display an artificial
constant pressure boundary.
Generally speaking, this rule of thumb works: when you under-estimate your production time
you under-estimate your reservoir, i.e. your kh will be too low or you will see a no-flow
boundary that does not exist. When you over-estimate your production time you over-estimate
your reservoir, i.e. your kh is too high or you will see a constant pressure boundary that does
not exist.
Most algorithms take a superposition time window around the considered point, the smoothing
being the distance between the point and each window limit. The program then selects, to the
left and the right, the first points outside the window, and with these two additional points,
makes a 3-point central calculation of the slope.
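A minimal sketch of such a windowed, 3-point central slope calculation (an illustration of the principle, not the algorithm of any particular software): `x` is the superposition or log time, `p` the pressure, and `L` the smoothing window half-width.

```python
import numpy as np

def windowed_derivative(x, p, L=0.1):
    """Derivative with a smoothing window, in the spirit described above.

    x : superposition (or log) time, sorted increasing.  p : pressure.
    For each point, the first neighbours further than L on each side are taken and a
    weighted 3-point central slope is computed.  End points with no neighbour outside
    the window on one side are returned as NaN.
    """
    x, p = np.asarray(x, float), np.asarray(p, float)
    der = np.full(x.size, np.nan)
    for i in range(x.size):
        left = np.where(x < x[i] - L)[0]
        right = np.where(x > x[i] + L)[0]
        if left.size == 0 or right.size == 0:
            continue                      # no point outside the window on this side
        l, r = left[-1], right[0]         # closest points outside the window
        dxl, dxr = x[i] - x[l], x[r] - x[i]
        sl = (p[i] - p[l]) / dxl          # left slope
        sr = (p[r] - p[i]) / dxr          # right slope
        der[i] = (sl * dxr + sr * dxl) / (dxl + dxr)   # weighted central slope
    return der
```

Note how, in such a scheme, the last available point is systematically used as the right-hand neighbour for all points within L of the end of the data: a single erroneous last point contaminates the whole tail of the derivative, which is exactly the artefact discussed below.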
In the middle of the pressure response there is no problem. But, towards the end of the
response the last point will be systematically used as the right point in the derivative
calculations of all the data within the smoothing window. If this last point is wrong, and it may
not even be displayed on the loglog plot, it will induce a trend in the smoothing that may be
confused with a boundary effect.
Even if the way to calculate the derivative is different, an incorrect end point will generally
create this sort of error.
The best way to avoid this problem is to properly set and synchronize the rate and pressure
histories. But if this is overlooked, one way to check this sort of problem is to change the
derivative smoothing. If the late time, boundary looking behavior changes with the smoothing,
then the engineer will have to look for a possible erroneous point.
When several gauges are run it is easy to identify drifting gauges if proper Quality Control is
performed. If only one gauge is run, then one should select a type of gauge that will be
unlikely to drift within the expected test duration.
The loglog plot below shows a comparison between two gauges used for the same test. Before
deciding which gauge was correct, it was necessary to make a basic calibration check (i.e. at
atmospheric pressure). In this case a 9 psi indicated pressure increase was found on the
‘guilty’ gauge between before and after the survey.
In a producing field, when you shut a well in and have not done anything recently on the nearby wells, the residual transient signal due to these wells will be negligible for the duration
of the test. But for long term testing, or if the production of the nearby wells has changed,
they may induce false boundary effects.
The author recalls an interpretation he had to do on a producing well on which a simple build-
up survey was run. The response showed a constant pressure boundary that was making
absolutely no sense. After (too) much time the author discovered that, in order to assure
constant production, the production of the closest well had been temporarily increased
simultaneously with the shut-in, hence creating an opposite image well and thus a ghost boundary at half the distance between the two wells.
When the engineer is in doubt about the influence of nearby wells, the best thing is to run a
first interpretation without the wells, then run a second model adding the interference of
nearby wells. If the difference between the two models is negligible, the engineer may return
to the simpler interpretation and ignore the effect of interference.
Below left is the pressure response acquired during the final build-up of a well test; it can be matched with an analytical multilayer model. But if we compare this response with the one given by the previous build-up, it reveals an inconsistency, as seen on the right side plot.
Fig. 8.N.5 – Build-up raw data matched with a 2-layer model
Fig. 8.N.6 – Comparison between two build-ups, loglog plot response
In fact a nearby well (2200 ft away) was opened for 50 hours and then closed during this test, creating interference. Another well test, performed in quiet conditions (no other well was opened or closed, all were simply left in their existing status), reveals the truth, a homogeneous behavior:
8.O Deconvolution
As already mentioned above, the analyses are generally performed on the build-up data for acquisition and stability reasons. Unfortunately these periods are limited in time and most of the information is contained in the ‘noisy’ production periods. In addition to their short duration, the build-up data are subject to superposition effects, and the boundary diagnosis becomes more difficult.
One solution is to get rid of the superposition effect and to take advantage of the long duration of the production periods. This solution is the technique called deconvolution: in a few words, it transforms a limited duration build-up into a drawdown pressure response with a duration equal to the considered production history. Consequently, it makes it possible to detect boundary effects in a test signal when the individual flow period was too short to detect them.
We get a complete rate history but the pressure history data is poor and only the detailed acquisition for the last build-up allows an analysis:
Fig. 8.O.1 – Production and rate history
Although the build-up data are of good quality, their relatively short duration (110 hours) limits the depth of the analysis and the boundaries that can be observed.
If we use the build-up pressure data and the complete previous production history for a deconvolution, we get the loglog plot shown below:
Advantages:
8.P Conclusion
Boundary effects are not systematically detected during a well test, but the identification of
boundaries is one of the most important results of Pressure Transient Analysis. Such
identification provides valuable information on proven volumes and reservoir geometry.
Even the lack of boundary effects is information by itself. The smallest closed system that can
reproduce the infinite looking, or only partially sealing, response will produce a much more
reliable estimate of the minimum reservoir size; far more reliable than using an equation of
‘radius of investigation’.
The limits of the well drainage area may be reservoir boundaries or ‘virtual’ boundaries coming
from the production equilibrium with other wells. They may be encountered in the producing
phase of very long tests, or when using permanent downhole gauge (PDG) data. Production
Analysis is also a good candidate to interpret this data.
When analyzing a build-up, boundaries are generally detected at the end of the test by
identifying deviations of the Bourdet derivative from Infinite Acting Radial Flow. A significant
deviation of the derivative may correspond to a very small deviation of the original pressure
signal. Numerous different causes, unrelated to reservoir limits, may incorrectly mimic the
response of a boundary effect: for example, incorrect rate history, gauge drift, reservoir trends
and interference from other wells.
Before rushing into the diagnostic of a reservoir limit, the pressure transient analyst must check that there is no other explanation. The sensitivity to the well production history, and its
integration in the calculation of the Bourdet derivative, is a critical point to consider. The
boundary distances must also be compatible with prior knowledge of the reservoir. Talking to
the geologist is a key to gaining confidence in the boundary diagnostic.
9 – PVT
OA – OH – OSF – DV
9.A Introduction
Pressure-Volume-Temperature, PVT, is the study of the physical behavior of oilfield
hydrocarbon systems. Information on fluid properties such as volumes and phases, and how
they change with temperature and pressure, is essential to many aspects of Dynamic Data
Analysis.
The need for valid PVT data cannot be over-stressed. Reservoir fluid sampling should be a
major objective of any test, particularly during drill-stem testing, as PVT analysis should be
performed as early as possible in the life of the field. Once production has started, it may
never again be possible to obtain a sample of the original reservoir fluid, which may be
continually changing thereafter.
In Dynamic Data Analysis the PVT is used, among other things, to:
Calculate fluid phase equilibrium and phase compressibility to correct produced volumes from surface to sandface
Calculate fluid phase equilibrium and phase gravities to correct pressures to datum
The present book is not aimed at teaching PVT, which is an ‘art’ in itself. However the validity
of PTA / PA diagnostics and results is largely dependent on the knowledge of the fluid PVT. It is
difficult to assess where we should start this subject. So, we will assume nothing and start
from scratch, with our humble apologies to those who have read this a hundred times before.
The simplest form of phase behavior is for a pure substance, such as methane or water, and it
can be represented on a simple 2-dimensional graph of pressure versus temperature
in the figure below:
The boundary lines between the solid, liquid and gas phases represent values of pressure and
temperature at which two phases can exist in equilibrium. Crossing the liquid-gas curve from
left to right at constant pressure, by increasing the temperature, corresponds to heating the
liquid to its boiling point and boiling it off as gas or vapor.
If the gas is then cooled in the reverse process, it will condense at the same temperature.
There is no upper limit to the solid-liquid equilibrium line, but the liquid-gas line or vapor
pressure curve, terminates at the critical point. At pressures or temperatures above this
point only one phase can exist, referred to only as fluid because it has properties similar to
both gas and liquid close to the critical point.
For a pure component, the typical Pressure-Volume behavior at constant temperature is shown
above. Starting in a single-phase liquid state, the volume is increased which causes a sharp
pressure reduction due to the low liquid compressibility. The point at which the first gas bubble
appears is the bubble point. When the volume is further increased the pressure remains the
same until the last drop of liquid disappears; this is the dew point. Past that point, only gas
exists and, as the volume increases, the pressure is reduced, though by comparison not as quickly as with the single phase liquid, because of the higher gas compressibility. The right-hand figure that follows brings together the P-V and P-T behaviors into a 3-dimensional surface.
Instead of a single vapor pressure curve there are separate lines to represent the bubble
points and dew points of the mixture. The two-phase boundary of the system can extend
beyond the critical point, i.e. above the critical pressure or the critical temperature. The critical
point lies at the intersection of the bubble and dew-point curves, and no longer serves to
identify where the single phase region begins.
With most reservoir systems it is normal to concentrate only on the liquid-gas equilibrium
behavior, although some hydrocarbons do exhibit solid phases, such as wax precipitation
(solid-liquid) and gas hydrate formation (solid-gas). Natural hydrocarbon fluids can contain
hundreds of different pure substances, and the multiple interactions may lead to a great
number of phase loops where liquid and gas phases can exist in equilibrium over a wide range
of pressures and temperatures.
The behavior of an oil reservoir with decreasing pressure would be as follows: At reservoir
temperatures below the critical point, the two-phase region is entered via a bubble point,
where there is initially 100% liquid. As pressure continues to fall, gas is liberated and the
system reaches the 20% gas / 80% liquid line, and so on.
When it is above the bubble point, the oil is said to be under-saturated. At the bubble point,
or anywhere in the two phase region, the oil is said to be saturated.
For a fluid above the critical point at ‘A’, pressure reduction will pass through a dew point with
initially 0% liquid. Continued pressure reduction condenses more liquid through the 10% and
20% curves. It is this ‘reverse’ behavior, of pressure reduction condensing gas to liquid, which
has led to the term retrograde. Below a maximum liquid percentage, at ‘B’, re-evaporation
begins (normal gas-condensate behavior) and the system can go through a second dew-point
at ‘C’, if the original fluid composition has not changed.
Oil: When the reservoir temperature is below the critical point, the fluid behaves as oil and
shows a bubble point.
Retrograde gas condensate: Between the critical temperature and the upper
temperature limit of the two phase region, the fluid will be a retrograde gas condensate,
and exhibit a dew-point.
Single phase gas: If the reservoir temperature is above the upper temperature limit of
the two phase region, the fluid will be a single phase gas.
Single phase gas reservoirs can be dry gas or wet gas, depending on whether liquid is
collected in surface separation facilities.
Divisions between the sub-types are not clearly defined, except in the case of single-phase
gas, but usually give a good idea of the behavior of the fluid.
Initial Producing GLR = initial producing Gas Liquid Ratio at surface conditions
Initial oil API gravity:
$$API = \frac{141.5}{\gamma_o\,(60°F)} - 131.5$$
Indicative fluid type from the initial producing GLR (scf/stb):
Non volatile oil:  < 2,000
Volatile oil:      2,000 – 3,300
Condensate:        3,300 – 50,000
Wet gas:           > 50,000
Dry gas:           > 100,000
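A minimal sketch of this rule-of-thumb classification (thresholds copied from the table above; function names are illustrative):

```python
def fluid_type_from_glr(glr_scf_stb):
    """Indicative fluid type from the initial producing GLR (scf/stb), per the table above."""
    if glr_scf_stb < 2000:
        return "non-volatile oil"
    if glr_scf_stb < 3300:
        return "volatile oil"
    if glr_scf_stb < 50000:
        return "condensate"
    if glr_scf_stb < 100000:
        return "wet gas"
    return "dry gas"

def api_gravity(gamma_o_60f):
    """Stock-tank oil API gravity from the specific gravity at 60 degF."""
    return 141.5 / gamma_o_60f - 131.5

print(fluid_type_from_glr(4500), api_gravity(0.85))   # 'condensate', about 35 API
```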
[Schematic – Black-Oil volume definitions: reservoir gas and oil volumes Vg and Vo at (Tres, P); surface volumes Vg_sc;g, Vg_sc;o and stock-tank oil volume Vo_sc]
We consider the situation described above where reservoir gas and oil at some pressure P are
brought to surface. The oil goes through a series of separation stages. The volumes are noted
as follows:
It is assumed that no liquid evolves from the gas when it is brought to surface; the Modified
Black-Oil accounts for this situation – see the Wet gas section.
With the above definitions and assumptions the usual Black-Oil properties are given by:
Solution Gas Oil Ratio, Rs: $$R_s = \frac{V_{g\_sc;o}}{V_{o\_sc}}$$
Oil Formation Volume Factor, Bo: $$B_o = \frac{V_o}{V_{o\_sc}}$$
Gas Formation Volume Factor, Bg: $$B_g = \frac{V_g}{V_{g\_sc;g}}$$
The surface volumes are then related to the reservoir volumes by:
$$V_{o\_sc} = \frac{V_o}{B_o} \qquad ; \qquad V_{g\_sc} = \frac{R_s\,V_o}{B_o} + \frac{V_g}{B_g}$$
Bo, Bg and Rs, are functions of pressure and temperature. They can be obtained from
correlations or laboratory studies.
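A minimal sketch of these Black-Oil conversions (an illustration only, not the Ecrin implementation; consistent units are assumed for Bg and Rs):

```python
def surface_volumes(Vo_res, Vg_res, Bo, Bg, Rs):
    """Black-Oil conversion of reservoir volumes to surface volumes.

    Vo_res : reservoir oil volume,  Vg_res : reservoir free gas volume
    Bo, Bg : formation volume factors, Rs : solution gas-oil ratio (consistent units assumed).
    Returns (surface oil volume, surface gas volume).
    """
    Vo_sc = Vo_res / Bo                      # stock-tank oil
    Vg_sc = Rs * Vo_res / Bo + Vg_res / Bg   # solution gas plus free gas
    return Vo_sc, Vg_sc

# Example: 1000 rb of oil and 500 rcf of free gas, with Bo = 1.3, Bg = 0.005 rcf/scf, Rs = 600 scf/STB
print(surface_volumes(1000.0, 500.0, 1.3, 0.005, 600.0))
```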
We write Rsb = Rs(Pb), Bob = Bo(Pb). If we consider the separation process, it is important to
realize that Rsb and Bob do depend on the separator(s) conditions. A typical behavior is shown
below with varying separator pressure.
The selected separator process should be the one providing the minimum Rsb.
[Figure – API, Rsb and Bob versus separator pressure]
When dealing with wet gas it is common to calculate the reservoir gas gravity from the surface
effluents. We assume that there is one separator stage before stock-tank conditions as
pictured below:
Fig. 9.D.3 – Wet gas production
The volume, at standard conditions, and the specific gravity of the gas produced at the
separator and the tank are measured. The corresponding gas-oil-ratios, referenced to the
stock tank oil volume are calculated:
$$R_{sep} = \frac{V_{g\_sep}}{V_{o\_sc}} \qquad ; \qquad R_{tnk} = \frac{V_{g\_tnk}}{V_{o\_sc}} \qquad \text{and the total gas-oil ratio is:} \quad R = R_{sep} + R_{tnk}$$
The modified formulation accounts for liquid condensation from the gas. The properties are
defined from the volumes in the following schematic.
[Schematic – Modified Black-Oil volume definitions: reservoir volumes Vg and Vo at (Tres, P); surface volumes Vg_sc;g, Vo_sc;g, Vg_sc;o and Vo_sc;o]
Solution Gas Oil Ratio, Rs: $$R_s = \frac{V_{g\_sc;o}}{V_{o\_sc;o}}$$
Oil Formation Volume Factor, Bo: $$B_o = \frac{V_o}{V_{o\_sc;o}}$$
(Dry) Gas Formation Volume Factor, Bg: $$B_g = \frac{V_g}{V_{g\_sc;g}}$$
A new parameter is added to quantify the oil produced from the gas:
Solution Oil Gas Ratio, rs: $$r_s = \frac{V_{o\_sc;g}}{V_{g\_sc;g}}$$
The surface volumes are then given by:
$$V_{o\_sc} = \frac{V_o}{B_o} + \frac{r_s\,V_g}{B_g} \qquad ; \qquad V_{g\_sc} = \frac{R_s\,V_o}{B_o} + \frac{V_g}{B_g}$$
Most MBO formulations assume that all surface oils are the same. In other words the gravity of
the oil produced from the reservoir oil and the one produced from the reservoir gas are
assumed to be the same. A similar assumption is made for the surface gases.
Fig. 9.D.5 – Black Oil property definitions for water
Water gas solubility, Rsw: $$R_{sw} = \frac{V_{g\_sc;w}}{V_{w\_sc}}$$
Water Formation Volume Factor, Bw: $$B_w = \frac{V_w}{V_{w\_sc}}$$
Specific gravity: ratio of the gas density at standard conditions to the air density at standard conditions.
$$\gamma_g = \frac{\rho_{g,sc}}{\rho_{air,sc}}$$
Formation volume factor: relates the volume in the reservoir to the volume at standard conditions; defined from Z.
$$B_g = \frac{V_{res}}{V_{sc}} = \frac{Z\,p_{sc}\,T}{p\,T_{sc}}$$
Gas compressibility:
$$c_g = -\frac{1}{V}\left(\frac{dV}{dp}\right)_T = -\frac{1}{B_g}\left(\frac{dB_g}{dp}\right)_T = \frac{1}{p} - \frac{1}{Z}\left(\frac{dZ}{dp}\right)_T$$
Gas viscosity: measure of the resistance to flow. Difficult to obtain from measurement, usually determined from correlations.
Gas density:
$$\rho_g = \frac{p\,\gamma_g\,\rho_{air,sc}\,V_{sc}}{Z\,R\,T}$$
where Vsc is the volume of one gas mole at standard conditions: 379.4 scf
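A minimal sketch of these single-phase gas relations, assuming standard conditions of 14.696 psia and 60 °F and a Z factor already known from a correlation; all names and the example values are illustrative.

```python
P_SC, T_SC = 14.696, 519.67          # standard conditions: psia, deg Rankine (60 degF)
R_GAS = 10.732                       # psia.ft3 / (lbmol.degR)
M_AIR = 28.97                        # lbm/lbmol

def gas_bg(p_psia, t_degR, z):
    """Bg = Z * p_sc * T / (p * T_sc), reservoir volume per standard volume."""
    return z * P_SC * t_degR / (p_psia * T_SC)

def gas_density_lbm_ft3(p_psia, t_degR, z, gamma_g):
    """Real-gas density, rho = p*M/(Z*R*T) with M = gamma_g * M_air."""
    return p_psia * gamma_g * M_AIR / (z * R_GAS * t_degR)

# Example: 3000 psia, 200 degF (659.67 degR), Z = 0.9, gamma_g = 0.7
print(gas_bg(3000.0, 659.67, 0.9), gas_density_lbm_ft3(3000.0, 659.67, 0.9, 0.7))
```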
Fig. 9.E.1 – Z factor vs. p [psia]
Fig. 9.E.2 – Bg [scf/rcf] vs. p [psia]
Fig. 9.E.3 – cg [psi⁻¹] vs. p [psia]
Fig. 9.E.4 – μg [cp] vs. p [psia]
With the measurement of the two specific gravities γg_sep and γg_tnk, we define the average surface gas gravity as:
$$\gamma_g = \frac{R_{sep}\,\gamma_{g\_sep} + R_{tnk}\,\gamma_{g\_tnk}}{R_{sep} + R_{tnk}}$$
And the specific gravity of the reservoir gas is then given by (R in scf/STB):
$$\gamma_{gr} = \frac{R\,\gamma_g + 4600\,\gamma_o}{R + 133300\,\dfrac{\gamma_o}{M_o}}$$
With γo the stock tank oil gravity and Mo the condensate molecular weight:
$$M_o = \frac{42.43\,\gamma_o}{1.008 - \gamma_o}$$
All reservoir gas properties are then estimated using the reservoir gas gravity.
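A minimal sketch of this recombination, with the constants copied from the expressions above; the input values are illustrative only.

```python
def condensate_mol_weight(gamma_o):
    """Condensate molecular weight, Mo = 42.43*gamma_o/(1.008 - gamma_o)."""
    return 42.43 * gamma_o / (1.008 - gamma_o)

def reservoir_gas_gravity(R_scf_stb, gamma_g, gamma_o):
    """Specific gravity of the reservoir (wellstream) gas from surface measurements."""
    Mo = condensate_mol_weight(gamma_o)
    return (R_scf_stb * gamma_g + 4600.0 * gamma_o) / (R_scf_stb + 133300.0 * gamma_o / Mo)

# Example: total GOR of 20,000 scf/STB, average surface gas gravity 0.65, condensate gravity 0.78
print(reservoir_gas_gravity(20000.0, 0.65, 0.78))
```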
Formation volume factor: reservoir oil volume / surface oil volume. Obtained from a lab study or from a correlation.
Oil compressibility:
$$c_o = -\frac{1}{V}\left(\frac{dV}{dp}\right)_T$$
Above Pb: $$c_o = -\frac{1}{B_o}\left(\frac{dB_o}{dp}\right)_T$$ Below Pb: $$c_o = -\frac{1}{B_o}\left(\frac{dB_o}{dp}\right)_T + \frac{B_g}{B_o}\left(\frac{dR_s}{dp}\right)_T$$
Solution gas-oil ratio: gas dissolved in oil; lab study result or from correlation.
Fig. 9.E.5 – Bo [rb/STB] vs. p [psia]
Fig. 9.E.6 – co [psi⁻¹] vs. p [psia]
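A minimal sketch, assuming Bo(p) (and, below the bubble point, Rs(p) and Bg(p)) are available as tables; the compressibility is evaluated with numerical derivatives following the expressions above. The arrays in the example are hypothetical.

```python
import numpy as np

def oil_compressibility(p, Bo, Rs=None, Bg=None):
    """co(p) from tabulated Bo (and, below Pb, Rs and Bg) using numerical derivatives.

    Above Pb:  co = -(1/Bo) dBo/dp
    Below Pb:  co = -(1/Bo) dBo/dp + (Bg/Bo) dRs/dp
    """
    dBo_dp = np.gradient(Bo, p)
    co = -dBo_dp / Bo
    if Rs is not None and Bg is not None:          # saturated branch
        co = co + (Bg / Bo) * np.gradient(Rs, p)
    return co

# Example on a small saturated table (pressures in psia)
p  = np.array([1000.0, 1500.0, 2000.0, 2500.0])
Bo = np.array([1.18, 1.22, 1.26, 1.30])
Rs = np.array([250.0, 380.0, 510.0, 640.0])
Bg = np.array([0.0030, 0.0020, 0.0015, 0.0012])
print(oil_compressibility(p, Bo, Rs, Bg))
```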
In addition, when gas is present, it may dissolve in the water, leading to the consideration of the gas solubility Rsw.
Density: the density at any condition is obtained by dividing the standard condition density by Bw. The effect of dissolved gas on the density is usually ignored.
$$\rho_w = \frac{\rho_{w,sc}}{B_w}$$
Formation volume factor: reservoir water / surface water. Several effects are involved: the evolution of dissolved gas from the brine as P and T change, and the contraction of the brine. Obtained from correlation.
Water compressibility:
$$c_w = -\frac{1}{V}\left(\frac{dV}{dp}\right)_T$$
Above Pb: $$c_w = -\frac{1}{B_w}\left(\frac{dB_w}{dp}\right)_T$$ Below Pb: $$c_w = -\frac{1}{B_w}\left(\frac{dB_w}{dp}\right)_T + \frac{B_g}{B_w}\left(\frac{dR_{sw}}{dp}\right)_T$$
Fig. 9.E.9 – Rsw [scf/STB] vs. p [psia]
Fig. 9.E.10 – cw [psi⁻¹] vs. p [psia]
Even though the problem is not different from what is done in other fields, such as full-field reservoir simulation, a numerical model turns out to be inadequate to simulate short transients in a bubble point or dew point system. This is not linked to the PVT representation but rather to the way mobilities are estimated from average cell saturations. This aspect is not covered in
this chapter. Instead we focus on the classical analytical approaches.
Oil reservoir treatment in DDA can be made in different ways: single phase oil, Perrine, or
using an oil pseudo pressure.
$c_t = c_f + S_o c_o + S_w c_w + S_g c_g$
Assuming no gas in the reservoir the last term can be removed. The influence of connate water
may be considered.
In the simplest situation, DDA treats the three parameters ct, μo, and Bo as constants. They are estimated at the reservoir temperature and a given reservoir pressure.
9.F.2 Perrine
The Perrine method considers multiphase flow using a single mixture fluid downhole and treats
this fluid as a liquid analogue. The PVT properties involved (FVFs, solution gas-oil ratio, etc.) are estimated once and for all at the reservoir temperature and a given reservoir pressure.
Total downhole rate:

$q_{t,downhole} = q_o B_o + q_w B_w + (q_g - q_o R_s)\,B_g$

Internally a surface oil equivalent can be used, defined as the above expression divided by Bo:

$q_t = \dfrac{q_o B_o + q_w B_w + (q_g - q_o R_s)\,B_g}{B_o}$

This is the surface oil rate that would give the same downhole total rate.

In order to define the total compressibility, fluid saturations in the reservoir are needed, and they are assumed constant.

$c_t = c_f + S_o c_o + S_w c_w + S_g c_g$

Then, a single phase liquid analysis is conducted, on the basis of total mobility:

$\lambda_t = \dfrac{k_o}{\mu_o} + \dfrac{k_w}{\mu_w} + \dfrac{k_g}{\mu_g}$

In Ecrin an oil equivalent permeability ko_equ is defined, which is the total mobility multiplied by the oil viscosity:

$k_{o\_equ} = \lambda_t\,\mu_o$

The main assumption of the Perrine method is that the ratio of the fluid mobilities is equal to the ratio of the downhole productions. This can be used to express the effective permeabilities as follows:

$\dfrac{k_o}{\mu_o} = \lambda_t\,\dfrac{q_o B_o}{q_t B_o}$ ; $\dfrac{k_w}{\mu_w} = \lambda_t\,\dfrac{q_w B_w}{q_t B_o}$ ; $\dfrac{k_g}{\mu_g} = \lambda_t\,\dfrac{(q_g - q_o R_s)\,B_g}{q_t B_o}$
If the relative permeabilities are known the value of the absolute permeability can be
calculated with:
$k = \dfrac{k_o}{k_{ro}} = \dfrac{k_w}{k_{rw}} = \dfrac{k_g}{k_{rg}}$
Two major assumptions of Perrine’s approach are that (1) reservoir saturations are constant and (2) the ratios of mobilities are equal to the ratios of the downhole productions. By combination, those assumptions entail that the production ratios are also constant. In
fact, Perrine’s method is usually applied on a given build-up, and only the last rates before this
build-up are considered. The implementation in Ecrin is an extension that should be used with
care should the production ratios vary significantly.
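The bookkeeping of the Perrine method can be illustrated with the hedged sketch below; the function name, units and example values are assumptions for the example, not the Ecrin implementation.

```python
# Hedged sketch of the Perrine rate and mobility bookkeeping described above.
# Assumed units: qo, qw [STB/D], qg [scf/D], Bo, Bw [rb/STB], Bg [rb/scf],
# Rs [scf/STB], viscosities [cp], lambda_t [md/cp].
def perrine(qo, qw, qg, Bo, Bw, Bg, Rs, mu_o, mu_w, mu_g, lambda_t):
    """Return the surface oil-equivalent rate, ko_equ, and the effective phase permeabilities."""
    qt = (qo * Bo + qw * Bw + (qg - qo * Rs) * Bg) / Bo   # surface oil equivalent rate
    ko = lambda_t * mu_o * qo * Bo / (qt * Bo)            # ko = (k/mu)_o * mu_o
    kw = lambda_t * mu_w * qw * Bw / (qt * Bo)
    kg = lambda_t * mu_g * (qg - qo * Rs) * Bg / (qt * Bo)
    ko_equ = lambda_t * mu_o                              # oil equivalent permeability
    return qt, ko_equ, (ko, kw, kg)

print(perrine(qo=800.0, qw=200.0, qg=500e3, Bo=1.3, Bw=1.01, Bg=9e-4,
              Rs=400.0, mu_o=0.8, mu_w=0.5, mu_g=0.02, lambda_t=250.0))
```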
The integration of the viscosity and formation volume factor in the pseudo pressure permits a
consistent analysis where the pressure and thus the PVT properties vary significantly over a
period of time. This is provided that one considers periods where the pressures are above the
bubble point. Typically a series of build-ups over a long history, each one being at pressure
levels above the bubble point, will give consistent kh’s. Note however that if the FVF and
viscosity are assumed to vary, the change in compressibility is not captured by the pseudo
pressure, so the pseudo pressure is only applicable for getting an adequate permeability. Estimates of sizes and boundary distances with a varying PVT will not be reliable.
Below the bubble point, saturations are typically related to pressure through a steady-state assumption:

$\dfrac{k_{ro}}{k_{rg}} = \dfrac{\rho_g\,\mu_o}{\rho_o\,\mu_g}\,\dfrac{L}{V}$

Where L and V are respectively the mole fractions of Liquid and Vapor.
There are several potential problems with this approach. The relative permeabilities must be known and they will have a direct and important impact on the shape of the pseudo-pressures, and thus on the diagnostic made from looking at the response in terms of m(p). The necessary
relation between saturation and pressure is typically based on a near wellbore behavior and
this may not be valid elsewhere in the reservoir. For closed systems in particular, it has been
shown that the pseudo-pressure will not provide an analogy with the liquid case in pseudo-
steady state. Finally, in cases where the pressure in the reservoir varies significantly it is also
necessary to worry about the variation of the total compressibility. Pseudo-time functions can
be considered but again the saturations are part of the equations and this approach relies (as
with m(p)) on the assumption of some relation between pressure and saturations, with the
same potential dangers.
$m(p) = 2\int_0^{p} \dfrac{p}{\mu\,Z}\,dp$

Using pseudo-pressure alone assumes that the viscosity-compressibility product is constant. When this assumption is not valid, time may be replaced by a pseudo-time, see Chapter ‘Theory – The case of dry gas’.

(Normalized version) $t_{ps}(t) = \int_0^{t} I\left(p_{wf}(\tau)\right)d\tau$ where $I(p) = \dfrac{\left(\mu\,c_t\right)_{ref}}{\mu\,c_t}$
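The pseudo-pressure and the normalized pseudo-time are simple integrals and can be evaluated numerically from tabulated properties, for instance by trapezoidal integration as in the sketch below; the property tables shown are placeholders, not data from this book.

```python
# Illustrative sketch: numerical evaluation of the gas pseudo-pressure m(p) and of the
# normalized pseudo-time from tabulated properties (trapezoidal integration).
import numpy as np

p  = np.linspace(14.7, 5000.0, 200)            # psia
mu = 0.012 + 2e-6 * p                          # cp, placeholder mu(p)
Z  = 1.0 - 6e-5 * p + 1.2e-8 * p**2            # placeholder Z(p)
ct = 1.0 / p                                   # psi-1, crude placeholder (~ cg for a gas)

y = p / (mu * Z)
m_of_p = 2.0 * np.concatenate(([0.0], np.cumsum(np.diff(p) * (y[1:] + y[:-1]) / 2.0)))

def pseudo_time(t, pwf, p_table, mu, ct, p_ref):
    """Normalized pseudo-time: integral of (mu*ct)_ref / (mu*ct)(pwf) dt."""
    mu_ct = np.interp(pwf, p_table, mu * ct)
    I = np.interp(p_ref, p_table, mu * ct) / mu_ct
    return np.concatenate(([0.0], np.cumsum(np.diff(t) * (I[1:] + I[:-1]) / 2.0)))

t = np.linspace(0.0, 100.0, 50)                # hr
pwf = 5000.0 - 20.0 * t                        # placeholder flowing pressure history
print(m_of_p[-1], pseudo_time(t, pwf, p, mu, ct, p_ref=5000.0)[-1])
```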
We also saw in previous chapters that an equivalent function can be defined in order to correct for material balance, see Chapter ‘Theory - The case of dry gas’.
$\gamma_{gr} = \dfrac{R\,\gamma_g + 4600\,\gamma_o}{R + \dfrac{133{,}000\,\gamma_o}{M_o}}$

Corrected rates are used in the analysis and defined as: $q_{geq} = q_g\left(1 + \dfrac{133{,}000\,\gamma_o}{M_o\,R}\right)$
When dealing with gas-condensate, the same approach is often used even though the single
phase m(p) is not strictly applicable within the reservoir. The formation of condensate within
the reservoir will result in varying saturations and thus mobilities. This may be revealed on
transient responses as radial composite reservoirs and analyzed as such. Some studies suggest that close to the well the fluid velocity may sweep the condensate more efficiently, thereby creating three mobility zones. Specific models have been developed for this situation.
$m(p) = \int_0^{P}\left(\dfrac{k_{ro}\,\rho_o}{\mu_o} + \dfrac{k_{rg}\,\rho_g}{\mu_g} + \dfrac{k_{rw}\,\rho_w}{\mu_w}\right)dP$

And the same steady-state assumption relating production and saturations is usually made:

$\dfrac{k_{ro}}{k_{rg}} = \dfrac{\rho_g\,\mu_o}{\rho_o\,\mu_g}\,\dfrac{L}{V}$
As stated before there are a number of potential problems with this approach, starting with the validity of the steady-state assumption. The pseudo pressure shows two distinct sections, below and above the dew point, with two distinct slopes. This change in slope is supposed to correct the distortions due to the presence of condensate and its effect on mobility, but the correction may be such that it creates erroneous responses.
As in the oil case, pseudo pressures may work for the sake of getting a kh from a semilog analysis during a given flow period. Beyond that, their use is limited and dangerous.
Separator Tests
In addition to measuring the physical properties of the reservoir fluid, PVT laboratories also
determine the chemical composition. Because of the enormous number of individual
components present in most hydrocarbons it is not possible to identify them all, and similar
components are grouped together. Typically, individual measurements are given for the non-hydrocarbons (nitrogen, carbon dioxide and hydrogen sulphide) and for the light paraffin groups, methane to butanes. Heavier hydrocarbons are then grouped
according to the number of carbon atoms, and a final fraction groups all the unidentified heavy
components.
In a CCE experiment, a reservoir sample is placed in a cell at a pressure greater than or equal
to the reservoir pressure, and at reservoir temperature. Pressure is successively reduced by
modifying the cell volume. The total hydrocarbon volume Vt is noted and plotted versus
Pressure.
[CCE schematic: cell at T = Treservoir, total hydrocarbon volume Vt recorded as the pressure is lowered through Pb by mercury (Hg) withdrawal; oil and gas phases appear below Pb]
With a fluid showing bubble point behavior, there is a dramatic change of slope as the first
bubble of gas forms and the gas phase compressibility dominates the two-phase behavior.
In contrast, for gas condensates at the dew point pressure the first drop of liquid that forms
has a negligible effect on the overall compressibility, and there is no identifiable change in
slope. This curve will also be seen for single phase gases, and some volatile oil fluids near the
critical point, where gas and oil phases have similar properties. In these cases laboratory
studies are performed in a ‘window cell’, where the formation of a drop of liquid or a bubble of
gas can be physically observed. This need for visual inspection is a major handicap in trying to
identify samples at the wellsite.
The DLE experiment is designed to represent the depletion process within the reservoir. A
sample of reservoir fluid is brought to reservoir T and bubble point pressure. The pressure is
reduced by changing the cell volume. Gas is expelled while maintaining constant pressure. The
gas volume Vg and gravity are measured, as well as the oil volume. The step is repeated until
atmospheric pressure Psc is reached. The temperature is then reduced to 60°F and the
residual oil volume is measured, Vor.
[DLE schematic: successive gas volumes Vg1 … Vgn expelled by mercury (Hg) displacement at each pressure stage]
Gas Z factor
The Differential qualifier, and the ‘D’ trailing character on the properties, is introduced because
the volumes determined in the DLE experiment are referenced to the residual oil volume Vor.
This is different from the reference stock-tank oil volume obtained when going through
different surface separator stages. The conversion of the DLE results to the usual BO
properties is explained in section Converting Fluid Study Results.
$R_{sD}(P_k) = \dfrac{\sum_{i>k} V_{g_i}}{V_{or}}$ ; $B_{oD}(P_k) = \dfrac{V_{o_k}}{V_{or}}$

[Separator test schematic: the oil volume at bubble point Vob is flashed through the separator stages, releasing gas volumes Vg1, Vg2 and leaving the stock-tank oil volume Vo_sc]
The separator test provides BoSb and RsSb, where the subscript ‘S’ indicates that this is from a separator test and ‘b’ indicates that the original sample pressure is the bubble point pressure.
The gas volumes expelled from the various separator stages are expressed at standard
conditions. The above properties are then calculated as:
$R_{sSb} = \dfrac{\sum_{i} V_{g_i}}{V_{o\_sc}}$ ; $B_{oSb} = \dfrac{V_{ob}}{V_{o\_sc}}$
At pressures below the bubble point it is assumed that the process in the reservoir is
represented by the DLE, and the process from bottomhole to the surface is represented by
Separator Tests.
We note RsDb = RsD(Pb) and BoDb = BoD(Pb). The DLE values are corrected using the separator
test values as follows:
$P < P_b:\quad B_o(P) = B_{oD}(P)\,\dfrac{B_{oSb}}{B_{oDb}}\;;\quad R_s(P) = R_{sSb} - \left[R_{sDb} - R_{sD}(P)\right]\dfrac{B_{oSb}}{B_{oDb}}$

At pressures above the bubble point, properties are obtained from a combination of CCE and Separator tests:

$P > P_b:\quad B_o(P) = \dfrac{V_t(P)}{V_b}\,B_{oSb}\;;\quad R_s(P) = R_{sSb}$
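A hedged sketch of this conversion is given below; the function and the synthetic DLE/CCE/separator values are illustrative assumptions only, with the pressure grid assumed increasing.

```python
# Sketch of the DLE-to-Black-Oil conversion described above (illustration only).
import numpy as np

def convert_dle_to_bo(p, BoD, RsD, Pb, BoSb, RsSb, Vt_over_Vb):
    """p: increasing pressure grid [psia]; BoD, RsD: DLE results on that grid;
    Vt_over_Vb: CCE relative volume on the same grid (only used above Pb)."""
    BoDb = np.interp(Pb, p, BoD)
    RsDb = np.interp(Pb, p, RsD)
    below = p <= Pb
    Bo = np.where(below, BoD * BoSb / BoDb, Vt_over_Vb * BoSb)
    Rs = np.where(below, RsSb - (RsDb - RsD) * BoSb / BoDb, RsSb)
    return Bo, Rs

p   = np.array([500., 1500., 2500., 3500., 4500.])
BoD = np.array([1.10, 1.20, 1.32, 1.45, 1.47])
RsD = np.array([150., 400., 700., 1000., 1000.])
Vt_over_Vb = np.array([1.20, 1.15, 1.08, 1.00, 0.99])
print(convert_dle_to_bo(p, BoD, RsD, Pb=3500., BoSb=1.40, RsSb=950., Vt_over_Vb=Vt_over_Vb))
```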
With the evolution of computers in the oil industry, many commonly-used correlations have
been converted into equation form to facilitate their use. However, the equations are never a
perfect fit to the original graphical relationship, so additional errors are incurred. For example,
the Dranchuk and Abou-Kassem equations for the Standing-Katz Z chart add an average error
of 0.3% to the values. Similarly, equations used to represent the Carr et al gas viscosity
correlation introduce an average error of 0.4%. In most cases such additional errors are
acceptable, but much larger than average errors do occur, particularly as the correlations can
be used outside their acceptable range of application. Some more recent correlations have
been developed directly in equation form, which avoids the additional fitting error.
Correlations are invaluable to the industry, but experimental data should be obtained wherever
possible. If a choice between correlations has to be made, the ideal approach is to make
comparisons with real PVT data on a number of similar fluids, to identify the most suitable
correlation and to obtain an idea of the likely errors involved. If this is not possible, it is best to
use the correlation derived from data of similar fluid type and geographical origin. It may also
be advisable to compare results obtained from different correlations, to see if any key
parameters are particularly sensitive to this choice. In the following sections we provide some information that may be relevant to the selection.
The reduced pressure and temperature are defined as $P_r = P/P_c$ and $T_r = T/T_c$, where Pc and Tc are the
pseudo-critical pressure and temperature. Recall from the beginning of this chapter the
definition of critical point for a pure component. The pseudo-critical properties are the mixture
equivalent and they are obtained either by mixing rule or correlation.
A correlation by Standing gives the pseudo-critical properties from the gas specific gravity. A correction is required when sour gases are present (CO2, N2, H2S), and this is based on a correlation by Wichert and Aziz.
The first two are the most accurate in representing the chart. Beggs and Brill is limited in
range so the others should be preferred.
Carr et al: gives a table for μg(T,P)/μg_sc(T,P) and a way of obtaining the standard conditions viscosity from Tr, Pr, and the amount of sour gases.
Lee et al: used by most PVT laboratories reporting gas viscosity, relates μg to molecular weight, density, and temperature.
These correlations predict gas viscosities with an error of about 3% in most cases. When the gas gravity increases above 1, i.e. with rich gas condensates, the error can increase to 20%.
9.K.2.a Pb and Rs
The correlations typically give a relation between bubble point pressure, solution gas-oil ratio, and oil and gas gravities. When using the total GOR as an input, the correlations provide Pb. Conversely, when using any pressure below Pb, they give Rs(P). Below is the list of correlations used in Ecrin.
No significant difference should be expected between those correlations for most cases.
Standing and Lasater correlations are recommended for general use. Correlations developed
for a particular region should probably be used in that region.
As for Pb and Rs, the estimates are based on total solution GOR, oil and gas gravities. The predictions of the above are in most situations fairly close.
1. Determination of the dead oil viscosity, μoD, correlated in terms of gravity and temperature.
2. Correction of the dead oil viscosity for the dissolved gas below Pb (saturated oil viscosity).
3. Correction for pressures above Pb (undersaturated oil viscosity).
There exist different correlations for those steps. In Ecrin three methods are offered, generically called ‘Beal’, ’Beggs and Robinson‘, and ’Glaso‘. They correspond to the following combinations:
Beggs and Robinson: 1 = Beggs & Robinson, 2 = Beggs & Robinson, 3 = Vasquez and Beggs
The estimate of dead-oil viscosity by correlations is very unreliable. Lab values should be
preferred whenever available.
Vasquez and Beggs: Actually combines a correlation by McCain below Pb and Vasquez and
Beggs above Pb.
Petrosky and Farshad: combination of McCain below Pb and Petrosky and Farshad above.
McCain
Osif
Helmoltz
It is always desirable to adjust the predictions of fluid properties obtained from correlations. On-site measurements can be used for that purpose, or else values resulting from PVT studies. In the latter case, however, if a full PVT study is available it should be used rather than the correlations.
When correlations are applied with known values, the modification made to the properties is
usually linear using a factor and a shift. The two fitting parameters can be obtained by running
a non-linear regression on any number of constraint points. For some properties, the
modification is different as reference values must be preserved. This is the case for formation
volume factors or gas Z-factor, with B(Psc)=1 and Z(Psc)=1.
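As an illustration of this linear "factor and shift" adjustment, the sketch below fits the two parameters by least squares on a set of constraint points; the correlation callable and the numbers are hypothetical.

```python
# Minimal sketch: find (a, b) such that a*correlation(p) + b best fits the constraint
# points in the least-squares sense (the "factor and shift" adjustment mentioned above).
import numpy as np

def fit_factor_and_shift(p_points, measured, correlation):
    """correlation: callable returning the correlated property for a pressure array."""
    pred = correlation(np.asarray(p_points))
    A = np.column_stack([pred, np.ones_like(pred)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(measured), rcond=None)
    return a, b

# Example with a hypothetical Rs correlation and two lab constraint points:
rs_corr = lambda p: 0.12 * p
print(fit_factor_and_shift([1500.0, 3000.0], [200.0, 390.0], rs_corr))
```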
In the Black Oil model, we have seen that the oil and gas properties are obtained as a function
of the surface component gravities, and the solution GOR. There is a one-to-one relation
between total GOR and bubble point.
This can be considered as the definition of the fluid composition, as explained in the next
section.
For given surface gravities, the effect of varying the GOR is shown below. The saturated
sections up to the bubble points are the same, whereas a funnelling is seen above Pb.
The saturated section is therefore a function of the surface component gravities only, and the value of any property at a particular pressure requires only the additional knowledge of the GOR, or equivalently Pb.
When using match values, those values may characterize a fluid with a GOR different from the
one under consideration. If this is the case then it may not be relevant to compare those
values and the values predicted by the correlation. Only if the GOR is the same can values be
compared for any pressure.
As a rule, the input to correlations should be the latest set of consistent production GOR, and
surface oil and gas gravities. When constraining this PVT with values originating from lab
studies of some initial fluid with GORi and Pbi, beware that undersaturated values cannot be
matched if the GOR has significantly evolved. For the saturated section, values can be
matched only for pressures below min(Pb,Pbi).
BO model:
phase \ component   w   o   g
W                   x
O                       x   x
G                           x

MBO model:
phase \ component   w   o   g
W                   x
O                       x   x
G                       x   x

The oil and gas components are the stock tank oil and total separator gas, as defined in the previous sections. All reservoir fluids are a mixture of these components. The phases can be characterized by their composition, the fraction of each component expressed in a conservative quantity, for instance the mass. So the phases are defined by their mass compositions Co, Cg and Cw. We write $C_p^k$ the fraction of component k in phase p. The compositions add up to unity: $\sum_k C_p^k = 1$.
The phase envelope of the Gas-Oil equilibrium for a real mixture of two components similar to
a reservoir sample can be plotted on a Pressure-composition plot as shown below. The plot
considers a general MBO situation where oil can exist in both oil and gas phases, and similarly
for gas.
Two curves have been plotted: in red for reservoir temperature Tr, in blue for standard temperature Ts. Each mixture of known composition at a given pressure is represented by a point on this graph.
The D mixture at pressure P is inside the envelope: it is the 2-phase region. The 2 phases are characterized by the points O and G, of compositions $C_{OB}(P)$ and $C_{GD}(P)$. The mass fraction of each component in each phase is given by:
For the oil phase, point (O): mass fraction of oil $C_O^o = 1 - C_{OB}(P)$ ; mass fraction of gas $C_O^g = C_{OB}(P)$
For the gas phase, point (G): mass fraction of oil $C_G^o = 1 - C_{GD}(P)$ ; mass fraction of gas $C_G^g = C_{GD}(P)$
The B mixture is a dry gas which has no dew point at reservoir temperature, but has 2 dew
points at standard temperature Ts.
The critical pressure Pcrit at reservoir temperature is the maximum pressure where both
phases can exist. The corresponding composition Ccrit is the critical composition at Tr. As we
assume that the components are the phases at standard conditions, the components used in the model are the oil phase at standard conditions, called oil, and the drier gas phase at Ts, called gas.
The same diagram can be plotted in the ideal case for the BO and the MBO models. In the BO
model, the gas phase always contains only gas.
[Idealized pressure-composition diagrams for the BO model (left) and the MBO model (right), showing the oil (O) and gas (G) phase compositions up to Pmax]
The composition at bubble point of the oil phase, and at dew point of the gas phases, can be
obtained from the Black Oil properties and the oil and gas densities at standard conditions.
$R_s(P_b) = \dfrac{C_O}{1 - C_O}\,\dfrac{\rho_{o,sc}}{\rho_{g,sc}}$ with $C_O = C_{OB}(P_b)$  =>  $C_O = \dfrac{1}{1 + \dfrac{\rho_{o,sc}}{R_s(P_b)\,\rho_{g,sc}}}$

$r_s(P_d) = \dfrac{1 - C_G}{C_G}\,\dfrac{\rho_{g,sc}}{\rho_{o,sc}}$ with $C_G = C_{GD}(P_d)$  =>  $C_G = \dfrac{1}{1 + \dfrac{\rho_{o,sc}\,r_s(P_d)}{\rho_{g,sc}}}$
As illustrated in the previous section, this indicates that once the surface gravities are known, the fluid is defined by the knowledge of the function Rs(Pb), which is the same as the under-saturated gas-oil ratio (and rs(Pd)). Visualizing the problem in terms of compositions makes it easier to consider mixtures of varying compositions, for which the bubble point and/or dew points can be readily obtained on the $C_{OB}(P)$ or $C_{GD}(P)$ curves. For other properties in the 2-phase region, all that is required is the knowledge of Bo(Pb) and μo(Pb) for the expected range of bubble points. In the single phase region (undersaturated case) an assumption can be made to estimate the properties. A common hypothesis is that of constant slope, i.e. of the derivative of the particular property with respect to pressure.
This is the basis for the multi-phase numerical model of Ecrin. This model can be fed with a set
of correlations, or may use tabulated properties resulting from a PVT study.
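A minimal sketch of the two composition relations above is given below; the unit system (mass per surface volume) and the example values are assumptions for illustration only.

```python
# Sketch: mass fraction of gas in each phase from Rs, rs and the surface densities.
# Assumed consistent units: Rs [scf/STB], rs [STB/scf], rho_o_sc [lbm/STB], rho_g_sc [lbm/scf].
def gas_mass_fraction_in_oil(Rs_pb, rho_o_sc, rho_g_sc):
    return 1.0 / (1.0 + rho_o_sc / (Rs_pb * rho_g_sc))

def gas_mass_fraction_in_gas(rs_pd, rho_o_sc, rho_g_sc):
    return 1.0 / (1.0 + rho_o_sc * rs_pd / rho_g_sc)

# Example: Rs = 800 scf/STB, rs = 5e-5 STB/scf, rho_o_sc = 287 lbm/STB, rho_g_sc = 0.06 lbm/scf
print(gas_mass_fraction_in_oil(800.0, 287.0, 0.06),
      gas_mass_fraction_in_gas(5e-5, 287.0, 0.06))
```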
$Z^3 + A_2 Z^2 + A_1 Z + A_0 = 0$

$P = \dfrac{RT}{V - b} - \dfrac{a}{V^2}$

in which b represented the volume of the individual molecules, and a was the inter-molecular attraction.
The two most common equations of state used in PVT calculations are:

Soave-Redlich-Kwong (SRK): $P = \dfrac{RT}{V - b} - \dfrac{a}{V\,(V + b)}$

Peng-Robinson (PR): $P = \dfrac{RT}{V - b} - \dfrac{a}{V^2 + 2bV - b^2}$
Equations of State are used in compositional simulators, as opposed to Black Oil, and they are a sine qua non condition for considering such problems as CO2 flooding, mixtures of several reservoir fluids, etc.
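In practice the cubic form is solved numerically for Z. The sketch below, with placeholder coefficients, simply extracts the real positive roots; the largest is conventionally taken for the vapour and the smallest for the liquid.

```python
# Illustrative sketch: roots of Z^3 + A2*Z^2 + A1*Z + A0 = 0. The coefficients depend on
# the chosen EOS (SRK, PR) and on a, b, p, T; the numbers below are placeholders only.
import numpy as np

def z_roots(A2, A1, A0):
    roots = np.roots([1.0, A2, A1, A0])
    real = roots[np.isreal(roots)].real
    real = real[real > 0.0]
    # Largest root -> vapor Z, smallest root -> liquid Z (middle root is unphysical).
    return real.min(), real.max()

print(z_roots(-1.0, 0.25, -0.01))
```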
10 – Numerical models
OH – ET – OSF – DV
10.A Introduction
Numerical models are becoming increasingly popular in well test analysis, mainly because they
address problems far beyond the reach of analytical and semi-analytical models. The two main
areas of usage of numerical models are nonlinearities, such as multiphase or non-Darcy flow,
and complex reservoir or well geometries. Numerical models can also be used to replace rate
by pressure constraints when the well flowing pressure goes below a certain point, hence
avoiding the embarrassing negative pressures often generated by analytical models.
The first attempts at numerical well testing were done ad hoc across the industry by engineers
using standard reservoir simulators with local grid refinement. In the early 1990's, the first
industrial project involved pre-conditioning of an industry standard simulator using PEBI
gridding. Since then, several technical groups have been working on numerical projects
dedicated to transient analysis.
In recent years, improvements in automatic unstructured grids and the use of faster
computers have allowed such models to be generated in a time that is acceptable to the end
user. The change has been dramatic, the time required to calculate the solution has decreased
from days to hours, then to minutes, and now, for linear diffusion problems, to seconds. Using
gradient methods even nonlinear regression is possible, and improved well-to-cell models allow
simulation on a logarithmic time scale with little or no numerical side effects. Last, but not
least, automatic gridding methods allow such models to be used without the need for the user
to have a strong background in simulation.
The main goal of numerical models is to address complex boundary configurations, but this
part of the work is actually easily done by any simulator. The problem is to also address what
is easily done by analytical models, i.e. the early time response and the logarithmic sampling
of the time scale. This requires, one way or the other, to get more grid cells close to the well,
and this has been done using three possible means: local grid refinement of cartesian grids,
unstructured (Voronoi, PEBI) gridding, or finite elements. These different options, shown
below, have their pros and cons that are beyond the scope of this book.
Warning: KAPPA has taken the option to develop a numerical model using the Voronoi grid.
This chapter describes this option only, and as such is far more software related and vendor
specific than the rest of this book.
Numerical models adapted to DDA include an interactive process to define the geometry, an
automatic grid builder and a simulator. The simulator will typically have a linear and a
nonlinear solver.
When the diffusion problem to model is linear, taking the same assumption as in an analytical
solution, the process only requires one iteration of the linear solver for each time step. The
solution is very fast and the principle of superposition can be applied. In this case, the
numerical model acts as a ‘super-analytical-model’ which can address geometries far beyond
those of an analytical model. This is developed in Section ‘Handling linear problems’.
When the problem is nonlinear the numerical module is used more like a standard simulator,
with ‘just’ a grid geometry adapted to a logarithmic time scale. The nonlinear solver is used,
iterating on the linear solver. Nonlinearities are developed in Section ‘Handling Non-linear
problems’.
A numerical model can also be used to change the well constraint in time. For each well, a
minimum pressure is set, below which the simulator changes mode and simulates the well
production for this minimum pressure. This is also developed in Section ‘Handling Non-linear
problems’.
Any segment of this polygon may be set as a sealing or constant pressure boundary. If inner
boundaries are present, any number of polyline faults may be drawn with control of individual
fault transmissibility. Individual wells (vertical, horizontal and/or fractured) may be created
and positioned, and their corresponding production history entered. Later, when the model is
defined, vertical and fractured wells may be individually specified as fully penetrating or of
limited entry. Once the geometry of the problem is defined, the display of the original bitmap
is turned off and the 2D Map displays a vector description of the problem:
Fault polylines may also be used to delimit composite zones where separate mobilities and
diffusivities may be defined. Additional composite zones may also be added around each well,
the figure below illustrates this with two radial composite wells and a different western
reservoir compartment.
The model can display the automatic gridding, adapted to honor the reservoir contour, inner
faults and wells:
The default is recommended but specialists may modify the basic grid geometry, size, main
directions, and the local grid refinement around each well:
A Voronoi cell is defined as the region of space that is closer to its grid node than to any other
grid node. A key property of the Voronoi grid is that the contact segment (also called contact
face) between two neighboring cells is the bisector of the segment linking the cell nodes.
The Voronoi grid is closely related to the Delaunay triangulation. In fact the Delaunay facets
are obtained by linking the Voronoi cell nodes together. The most important geometrical
property of the Delaunay triangulation is that each triangle verifies the empty circle condition,
i.e. each triangle’s circumcircle does not enclose any other point of the point set:
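As an illustration of this duality (and not of the gridding modules described in this chapter), the sketch below builds the Voronoi diagram and the Delaunay triangulation of the same small, hypothetical point set with scipy.

```python
# Illustration only: Voronoi diagram and Delaunay triangulation of a 2D point set,
# showing the duality mentioned above.
import numpy as np
from scipy.spatial import Voronoi, Delaunay

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1000.0, size=(30, 2))   # hypothetical grid nodes, in ft

vor = Voronoi(nodes)      # cell vertices and ridges (contact faces between cells)
tri = Delaunay(nodes)     # triangles linking the same nodes

print(len(vor.vertices), "Voronoi vertices,", tri.simplices.shape[0], "Delaunay triangles")
```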
Segment module introduced in order to respect constraint lines such as the reservoir
contour and faults.
Well modules, which are radial for vertical wells but present more complex shapes for
fractured (2-D), horizontal and limited entry wells (3-D).
A short description of how to build a complete Voronoi grid is presented below. We will
however not detail here all steps that are necessary to do so. In particular we will skip the
somewhat tedious treatment of interference between modules.
Everything starts with a vector representation of the problem and particularly with a contour,
wells and inner boundaries:
Fig. 10.C.4 – Vector representation of the problem: contour, faults, vertical wells
The base module is built as in the figure below. In this figure the points that would be
overlapping with other modules have already been removed.
The radial modules are built around the wells. Fig. 10.C.6 shows the new modules truncated by
interference and the nodes remaining after processing:
The segment modules are built around the faults and the contour. The diagram below shows the corresponding nodes remaining after processing:
The corner modules are built. They constitute a local refinement that ensures that the shape of
the Voronoi grid will exactly follow the angles of the contour and inner fault polylines.
The final superposition of all modules leads to the representation below, in which the main interferences are indicated (S=segment, R=radial, C=corner):
Finally, the triangulation is performed and the resulting Voronoi grid is shown here:
This result is to be compared with the composite image below showing the specific influence of
each module on the final result:
Fig. 10.C.12 – Vertical well gridding: 2D view and 3D display of the highlighted (gray) sectors
Fig. 10.C.13 – Two possible gridding modules for a fractured well: elliptic (left)
and pseudo-radial (right).
As before, the well module must be tri-dimensional. The resulting unstructured 3D module is
obtained by combining the limited entry with the fully penetrating fracture module. Hence, the
defining parameters are now: Rmin, Rmax, number of sectors, Ndx and Nz.
The well module is 3D in order to capture early time spherical and vertical radial flows. The
resulting 3D unstructured module may be distorted if vertical anisotropy is introduced. It is
controlled by the following parameters: Rmin, Rmax, number of sectors and Ndx.
Fig. 10.C.16 – Horizontal well module: 2D display (the 3D module is highlighted in gray),
3D view and vertical cross-section (the well is highlighted in red).
One typical application of this is numerical solutions for gas flow, in which the pressure dependence of the gas properties is, in part, handled by the classical pseudo-pressures. In this case the numerical simulator solves its system of equations for m(P) instead of P. This approach allows us to easily obtain solutions that would be out of reach of analytical solutions (the two figures below). Even more important, in some situations the numerical (linear) simulator provides a solution faster than analytical solutions on equivalent geometries, considering its simplicity and the somewhat complex form that the latter can take.
Fig. 10.D.1 – Circular reservoir with two parallel faults, and its corresponding numerical
type curve. One can see the transition from linear flow, due to the faults,
to pseudo-steady state flow that is due to the circular boundary.
Naturally, in this application the solution obtained numerically is based on the usual
assumptions and limitations for analytical solutions: single phase flow of a slightly
compressible fluid, constant rock properties, etc. Hence it will lead to the same
approximations and possibly the same errors:
Fig. 10.D.3 – Buildup following a production period. Both solutions are numerical,
but the linear curve (in red) does not take gas material balance into account,
whereas the full nonlinear solution (in blue) does so: it is in agreement
with the average pressure (in green).
10.D.2 Formulation
The numerical resolution of the partial differential equations ruling fluid flow in reservoirs
consists in replacing those equations by difference equations where space and time are
discretized. We have seen in the previous paragraphs how space is discretized into grid cells
and grid nodes. Time discretization implies that the simulator will not provide a continuous solution (as, say, the cylindrical source solution computed analytically for a vertical well in an infinite reservoir), but will instead provide a discrete series of pressures P at given time steps {t1, …, tN}.
When the reservoir parameters (e.g., rock compressibility, permeability) and the fluid parameters (e.g., viscosity) can be considered constant in time, the problem becomes linear: the equation coefficients do not depend on the unknown we are solving for, usually the pressure.
This simplification reduces quite considerably the complexity and CPU time needed to
numerically solve the given problem.
$\vec{V} = -\dfrac{k}{\mu}\,\vec{\nabla}P$ ; $\mathrm{div}\left(\rho\,\vec{V}\right) = -\dfrac{\partial(\rho\,\phi)}{\partial t}$

Considering a gridding of the reservoir, the material balance of the cell (i) is expressed at a given time as:

$e_i = \sum_{j\in J_i} T_{ij}\,\lambda_{ij}\,\left(P_j - P_i\right) - \dfrac{V_i}{\Delta t}\,\Delta\!\left(\dfrac{\phi_i}{B_i}\right) + q_i$

Where ei is the material balance difference in the cell (i), Ji the set of all cells connected to cell (i), Vi the cell volume, and λij the transmissivity coefficient at connection (ij). The additional term qi refers to the possible presence of a well in cell (i).
If we assume that the coefficients in $e_i^{n+1}$ do not depend on the pressure, the equation above can be written as a linear system F(P) = 0.
Iterative solver methods (BiCGStab, GMRES, etc…) are coupled with matrix pre-conditioners
(ILU, SSOR, etc…) to solve this linear system for P.
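The sketch below illustrates the idea on a deliberately tiny case: one implicit time step of a 1D, homogeneous, linear diffusion problem assembled as a sparse system and solved with BiCGStab. The geometry, units and values are placeholders, not the simulator described here.

```python
# Very small sketch of one linear time step in the spirit of the cell material balance above:
# 1D homogeneous grid, no-flow ends, implicit scheme, iterative solution (BiCGStab).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n, T, acc = 50, 1.0, 5.0          # cells, inter-cell transmissibility, accumulation term Vi*phi*ct/(B*dt)
q = np.zeros(n); q[0] = -10.0     # a producing well in the first cell
P_old = np.full(n, 5000.0)

main = np.full(n, acc); main[1:] += T; main[:-1] += T
A = sp.diags([np.full(n - 1, -T), main, np.full(n - 1, -T)], [-1, 0, 1], format="csr")
rhs = acc * P_old + q             # acc*(P_new - P_old) = sum of fluxes + q, rearranged

P_new, info = bicgstab(A, rhs)
print(info, P_new[:3])
```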
Fig. 10.E.1 – Numerical solution for a radial composite system in a closed circular reservoir.
The outer zone (in yellow) presents a higher mobility.
Fig. 10.E.2 – There is now one more composite zone, for which there is no simple
analytical solution for this (unrealistic) geometry.
Composite zones are particularly useful to model a transition between fluids such as water
around an injector. But from the simulator point of view it is just a matter of different rock
properties assigned to the grid cells.
The representation of petrophysical properties displayed in the above figure is the final result of a two-dimensional interpolation of input data which are represented by the black data points. Three interpolation options are offered:
Inverse distance weighting, where the power parameter n (typically n=2) is the power to which each distance is raised in the weighting process. The smaller n, the smoother the final interpolated surface (a minimal sketch of this option is given after the list).
Linear interpolation, where the surface is assumed to be linear by parts. In this case the
interpolation is performed on the Delaunay triangulation.
Kriging, where the interpolated surface is obtained from a variogram (inverted covariance)
fit on the data points.
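A minimal inverse distance weighting sketch is shown below; the data points and the power n are placeholders for illustration.

```python
# Minimal inverse distance weighting sketch (one of the three options listed above).
import numpy as np

def idw(x_data, y_data, values, x, y, n=2.0, eps=1e-12):
    """Interpolate 'values' known at (x_data, y_data) onto the point (x, y)."""
    d = np.hypot(np.asarray(x_data) - x, np.asarray(y_data) - y)
    w = 1.0 / (d ** n + eps)                    # weights = inverse distance to the power n
    return np.sum(w * np.asarray(values)) / np.sum(w)

print(idw([0.0, 1000.0, 500.0], [0.0, 0.0, 800.0], [0.12, 0.18, 0.15], 400.0, 300.0))
```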
The three options along with their leading parameters permit a wide spectrum of possible
interpolation results, illustrated here:
Fig. 10.E.4 – Four different interpolated fields obtained from the same input (porosity) data.
From top-left to bottom-right: inverse distance weighting (n=2), linear interpolation,
kriging with a range of 1500 ft, kriging with a smaller range of 750 ft.
The data subject to interpolation may be classified into two categories: the data that affect the reservoir geometry, such as horizons and thickness, and the data describing the fluid and rock properties. Concerning the reservoir geometry, only the layer thickness is of significant importance in well test analysis, since gravity effects are usually neglected. Among the fluid and rock properties, the main data of interest are usually porosity and permeability. The interpolated fields for these two parameters are radically different in one respect: the average reservoir porosity can be easily calculated from the interpolated field values; however, there exists no rigorous approach to compute the average permeability equivalent to a permeability field such as the one shown in the figure below.
In fact, the best approximation consists in evaluating the effective average permeability as a
result of the numerical simulation. In such a case the pressure match line on the loglog plot is
no longer an input but a simulation result. This match line may still be moved up and down if
one introduces a ’fudge factor’ k/kfield, i.e. a global multiplier applied to the entire interpolated
permeability field when a simulation is performed.
Fig. 10.E.5 – Permeability field and corresponding simulation: although the tested well is in
a 50 mD ‘channel’ (the red zone in top figure), the effective simulation permeability is more
than 40% lower (when comparing the pressure match line to the 50 mD blue IARF line).
The reservoir cell characterized by a pore volume V and a permeability k is split into a fissure sub-cell f with a pore volume Vf and a matrix sub-cell m with a pore volume Vm. Vf and Vm are computed from the storativity ratio ω:

$V_f = V\,\phi\,\omega$
$V_m = V\,\phi\,(1 - \omega)$

Where φ is the porosity. The transmissivity Tfm between the fissure and the matrix sub-cells is defined using the interporosity flow parameter λ:

$T_{fm} = k\,\lambda\,\dfrac{V}{r^2}$

Where r is a characteristic length of the matrix-fissure interface.
The numerical simulation of double porosity behavior may be combined with more complex
geometries and heterogeneities, hence providing far more flexibility in terms of problem
definition than can be found in analytical solutions. It is even possible to consider the combination of different double porosity definitions within different composite zones, as shown here:
To sum up, horizontal and vertical anisotropies affect the grid construction phase, but the other simulation steps, such as the matrix system to be solved, are not affected. This simple
treatment is only possible when the anisotropy is constant, that is, when the grid may be
entirely deformed using a unique anisotropy tensor.
The reservoir geometry is somewhat restricted with this configuration: the layers cannot be
disconnected and the layer horizons are constant. In fact this limitation is acceptable as long
as single phase flow without gravity effects is simulated. Other than that, the layered system
may be commingled with no hydraulic communication between layers, or there may be some
crossflow between layers, as shown below:
Fig. 10.E.13 – Pressure field around a well producing in a four layer system, without and with
vertical crossflow between layers. (left and right respectively). The top and bottom layer
have been defined with equal permeability. Middle layers with lower permeability.
Multi-layered systems give more degrees of freedom concerning the well perforations. Instead
of perforating the well over the complete reservoir it is possible to open individual perforations
of each layer (figure below). Note however, that with the 2 ½ D gridding used the early time
spherical flow that should theoretically appear due to the partial completion does not do so, as
the specified vertical permeability in the model only applies at the interface between the
layers. The layers themselves are isotropic.
Fig. 10.E.14 – Pressure field around a well producer in a (commingled) four layer system.
The bottom layer is not perforated (compare with Fig. 10.E.13).
In multilayer simulations it is also possible to compute and output layer rates at the wellbore,
i.e. individual layer, bottom hole, contributions. This output facility is particularly useful in
identifying potential crossflow in the wellbore during shut-in periods, as displayed
below.
Fig. 10.E.15 – Three layer system, layer contributions simulated at the well during a build-up
following a production period. Because the pressure drop is less important in the middle,
high permeability, layer at the end of the production, crossflow occurs in the wellbore
until the pressure reaches equilibrium.
If we recall the discretized material balance equation in the cell (i) over the time interval $\Delta t = \left[t^{n}, t^{n+1}\right]$ obtained in section Handling Linear Problems (Basic):

$e_i^{n+1} = \sum_{j\in J_i} T_{ij}\,\lambda_{ij}^{n+1}\,\left(P_j^{n+1} - P_i^{n+1}\right) - \dfrac{V_i}{\Delta t}\left[\left(\dfrac{\phi_i}{B_i}\right)^{n+1} - \left(\dfrac{\phi_i}{B_i}\right)^{n}\right] + q_i^{n+1}$

The system F(P) = 0 is now non-linear because the components in the $e_i^{n+1}$ term depend on pressure. It can be solved with an iterative Newton-Raphson method, in which a linearized approximation of the solution at non-linear iteration l can be written:

$F^{l+1} \approx F^{l} + J\,\Delta P$ where $J = \dfrac{\partial F^{l}}{\partial P}$ is the Jacobian matrix and $\Delta P = \left(P^{l+1} - P^{l}\right)$.

The linearized system $P^{l+1} = P^{l} - J^{-1}\,F^{l}$ can now be solved for $P^{l+1}$, using the same techniques as in the linear case.
At each time step, the Newton-Raphson algorithm iterates until an acceptable minimum of the residual is obtained, i.e. until the next iterate gets close enough to the current solution. In this process the CPU time clearly depends on the precision sought, as can be seen in the table below.
[Table: Material balance error (%) | Number of linear iterations per time step | CPU ratio]
The above numbers were obtained simulating a single gas well producer in a rectangular,
homogeneous reservoir. Various runs were compared: a linear case where the simulator solves a problem made linear after the introduction of pseudo-pressures, and four non-linear cases with increasing restrictions on the non-linear loop convergence criterion (the ‘residual’). The non-linear loop introduces only a small overhead in this example because the problem remains only slightly non-linear, the more so because the dependency of the gas PVT on the pressure variable remains relatively smooth. It will be shown in the sections below that the additional CPU time can be far more important when strong non-linearities are introduced. It can also be demonstrated that forcing the convergence criterion to very small values is pointless: the only effect is to make the CPU time explode for a slight gain in precision, before reaching the precision limit of the machine.
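The Newton-Raphson loop can be illustrated on a toy, single-cell nonlinear material balance (a tank depleted through a 1/B(p) term that depends on pressure); everything below is a placeholder sketch, not the simulator's formulation.

```python
# Toy Newton-Raphson loop on a single-cell nonlinear material balance.
# Assumed units: Vp [rcf], dt [D], q [scf/D], pressures [psia]; 1/Bg(p) is a placeholder.
Vp, dt, q = 1.0e6, 1.0, 2.0e6
p_old = 5000.0
invB  = lambda p: p / (14.7 * (1.0 - 5e-5 * p))        # placeholder 1/Bg(p) ~ p/Z
dinvB = lambda p: (1.0 / 14.7) / (1.0 - 5e-5 * p) ** 2

F = lambda p: -Vp * (invB(p) - invB(p_old)) / dt - q   # residual of the cell balance
J = lambda p: -Vp * dinvB(p) / dt                      # dF/dp (1x1 Jacobian)

p = p_old
for it in range(20):
    dp = -F(p) / J(p)          # Newton step: solve J * dp = -F
    p += dp
    if abs(dp) < 1e-8:         # convergence criterion (the 'residual' tolerance)
        break
print(it, p)
```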
Darcy’s equation is only valid for laminar flow, but in some situations fluid flow in the reservoir can become turbulent, for example in the vicinity of high rate gas producers. In fluid mechanics the transition from laminar to turbulent flow is characterized by the Reynolds number Re, the ratio of inertial to viscous forces.
Laminar flow occurs at low Reynolds numbers (Re < 2000-3000), in which case the fluid motion is smooth and steady. When inertial forces dominate (Re > 4000), the flow becomes turbulent, producing random vortices and other fluctuations.
Because turbulence usually occurs in the vicinity of the wells, a classical way to handle this involves introducing a pseudo-skin parameter D so that the total skin S’ becomes: $S' = S_0 + D\,q$, where q is the well flowrate. This simplified approach can of course be used as-is in numerical simulations. However numerical simulation makes it possible to incorporate the non-Darcy flow effects at an upper level, through the introduction of the Forchheimer factor β in a generalized Darcy’s equation.

Darcy: $-\vec{\nabla}P = \dfrac{\mu}{k}\,\vec{u}$

Forchheimer: $-\vec{\nabla}P = \dfrac{\mu}{k}\,\vec{u} + \beta\,\rho\,\vec{u}\,\lvert\vec{u}\rvert$

In which β is the Forchheimer factor; β has the dimension L⁻¹, that is m⁻¹ in the SI system. If we introduce the notions of Darcy velocity uD (grossly speaking, the velocity given by laminar flow) and non-Darcy velocity u, we can obtain the following relationship:

$u = f_{ND}\,u_D$ with $f_{ND} = \dfrac{2}{1 + \sqrt{1 + 4\,k\,\beta\,\alpha\,u_D}}$

Where k is the permeability, and α = ρ/μ is the inverse of the kinematic fluid viscosity.
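A sketch of this velocity ratio is given below, assuming SI units for simplicity; the input values are placeholders.

```python
# Sketch of the non-Darcy velocity ratio f_ND given above.
# Assumed SI units: k [m2], beta [1/m], uD [m/s], rho [kg/m3], mu [Pa.s].
import math

def f_ND(k, beta, rho, mu, uD):
    """Ratio u/uD between the Forchheimer and Darcy velocities."""
    alpha = rho / mu                  # inverse of the kinematic viscosity
    return 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * k * beta * alpha * uD))

print(f_ND(k=1e-13, beta=1e8, rho=150.0, mu=2e-5, uD=0.05))   # -> 0.4, a strong non-Darcy effect
```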
In fact it can be shown that both approaches (pseudo-skin and Forchheimer factor) are
equivalent when the kinematic fluid viscosity ( /) is held constant. In practice, both
approaches provide results that are comparable:
Fig. 10.F.1 – Non-Darcy flow around a gas producer, successive buildups following constant,
and increasing, rate production periods. The crosses and dots are simulated
using the Forchheimer equation, whereas the plain curve is obtained using
an equivalent pseudo-skin factor D.
However there are situations in which the pseudo-skin factor D leads to a somewhat
incomplete modeling of the problem and it is better to replace this by the generalized Darcy
equation.
In that case relative permeabilities must be introduced and total mobility becomes:
$\lambda_t = \dfrac{k_p}{\mu_p} + \dfrac{k_w}{\mu_w} = k\left(\dfrac{k_{rp}(S_w)}{\mu_p} + \dfrac{k_{rw}(S_w)}{\mu_w}\right)$
Where p refers to oil or gas and Sw is the water saturation. Contrary to the assumptions that
have to be made in analytical methods (Perrine), the water saturation is not held constant in
time or in space. Relative permeability curves define regions of phase mobility, as shown in the
figure below.
Fig. 10.F.2 – oil (green) and water (blue) relative permeability curves. In the saturation region
(A) water is not mobile and maximum oil relative permeability is reached.
In region (C) oil is not mobile.
Hence, the stabilization of the derivative on the loglog plot depends not only on the bulk
permeability, but also on the dynamic phase mobilities induced by the distribution of fluid
saturations.
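As an illustration, the sketch below builds Corey-type relative permeability curves (an assumption, not the curves of the figure above) and evaluates the resulting total mobility as a function of water saturation.

```python
# Illustration: Corey-type relative permeability curves and the resulting total mobility.
# End points, exponents, viscosities and permeability are placeholder values.
import numpy as np

def corey(Sw, Swi=0.2, Sor=0.2, kro_max=0.9, krw_max=0.4, no=2.0, nw=2.0):
    S = np.clip((Sw - Swi) / (1.0 - Swi - Sor), 0.0, 1.0)   # normalized saturation
    return kro_max * (1.0 - S) ** no, krw_max * S ** nw

def total_mobility(k, Sw, mu_o=1.0, mu_w=0.5):
    kro, krw = corey(Sw)
    return k * (kro / mu_o + krw / mu_w)                     # md/cp

Sw = np.linspace(0.2, 0.8, 7)
print(total_mobility(100.0, Sw))
```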
Let us consider the example of a water injector in an oil reservoir (following figure):
During the injection phase the pressure match line is based on the total mobility with
maximum water saturation since we inject only water.
The fall-off however sees different effective permeabilities induced by varying water
saturation. This is at a maximum close to the well and a minimum further away in the
reservoir.
The effective two-phase mobility can take values in the range of ‘minimum water
saturation (the green line)’ and ‘maximum water saturation (the blue line)’.
In an isotropic, linear, elastic reservoir we model the rock deformation through a relationship between porosity, permeability and the average internal stress σm:

$\phi = \phi_0\,(1 - a\,\sigma_m)$

$k = k_0\,(1 - b\,\sigma_m)$  (1)
$k = k_0\,\exp(-b\,\sigma_m)$  (2)
$k = k_0\left(\dfrac{\phi}{\phi_0}\right)^{n}$  (3)

Where k is defined by equation (1), (2) or (3) depending on the rock type. In the absence of overburden isostatic pressure, we introduce the following relationship between σm and the pressure drop Pi − P:

$\sigma_m = \dfrac{1}{3}\left(\sigma_x + \sigma_y + \sigma_z\right) = \left(P_i - P\right)$

Hence, k and φ are directly related to the pressure drop Pi − P.
Porosity and permeability reduction may or may not be irreversible, in the sense that the model can authorize (or not) the parameter to recover its initial value when the pressure drop decreases (see figures below). The effective permeability is lower here during the two production phases (blue and red), and reverts to its initial value during the build-ups, as the model permeability is reversible.
Introducing pressure dependency of porosity and permeability renders the problem highly nonlinear. This makes it awkward to use and to interpret, considering the difficulty in gathering valid data to build the φ(P) and k(P) curves, and the strong dependency of the simulated pressure on these curves. A 5% error on k(P) can easily induce a 50% error on the simulated pressure.
With the simulation of three-phase flow problems we reach the limits of our current capabilities
in terms of numerical well test modeling. The ruling equations and the solution methodologies
are well known and in essence three-phase flow is merely an extension of the dead oil and
water or dry gas and water problems we have already described. Unfortunately one
unexpected problem arises, as can be seen in the figure below.
Fig. 10.F.7 – Simulation of a producer in a saturated oil reservoir: left: the gas saturation
at the end of the production, right: the resulting simulation on a loglog plot.
In the illustrated example, the erratic behavior of the semilog derivative is due to the
successive appearance of free gas, cell ring by cell ring, around the well as the depletion
occurs and the cell pressure drops below the fluid bubble point.
This appearance is not controlled by the relative permeabilities (free gas can appear before it
reaches a saturation high enough to be mobile) but by thermodynamic equilibrium. Successive
cell rings can be flooded by free gas within a few time steps, hence changing considerably the
effective permeability, and thence inducing the erratic behavior on the loglog plot.
Unfortunately, changing the grid cell size is not sufficient to get rid of the problems. This just
changes the frequency of the humps and bumps on the loglog plot.
Fig. 10.F.8 – Throughout its production history the well pressure must remain
above 14.7 psia (‘pwf min‘) and below 7900 psia (‘pwf max’).
When the bounding pressure values are reached, the well is switched from rate-controlled to
pressure-controlled production. It may switch back again to rate-controlled production when
the simulated pressure re-enters the permitted limits. Such a case is shown in Fig. 10.F.9:
during the first production phase the well pressure reaches a minimum and switches to
constant pressure production. Later it reverts back to variable rate production.
Fig. 10.F.9 – The minimum bounding pressure is reached during the first production
Fig. 10.F.10 – This water injector reaches its maximum pressure (orange dotted line)
and is then switched to constant pressure injection. Later, the injection target rate
is lowered and the well can switch back to constant rate injection.
11.A Introduction
In a standard test we put the well in production and we monitor the pressure build-up. We use
the single total production rate in the same well to correct the pressure model by
superposition. As long as we neglect the difference of fluids, injection / fall off tests can also be
considered standard tests. Special tests are just whatever is not standard. In this version of
the book we will detail four different types of special tests.
Formation tests
When looking at the pressure response at the producing probe or packer, a formation test can
be considered to be a standard test using a very specific well geometry. The methodology will
be strictly the same. The difference will be when using multi-probe tools, where one has to
integrate the standard pressure response of the producing probe and the different vertical /
lateral interferences observed at the other probes. This is where the methodology and the
models will diverge. In fact we are today at the starting point of KAPPA’s specific work on
formation tests. Later developments (Generation 5) will include a specific software module,
and probably a dedicated chapter in this DDA book.
Slug tests
In most DST’s the early part of the well production will see no flow at surface and a rise of the
liquid column. This will proceed until the liquid level eventually reaches the surface. ‘Slug’
pressures vs. time will be used to quantify the sandface production, which then can be used as
the input rate history for the build-up analysis. However there are some techniques that allow
the transient pressure response of the slug to be analysed independently. This specific
processing will be demonstrated in this chapter.
Multilayer tests
We will perform a standard test in a multilayer formation if we produce the whole reservoir and just acquire the pressure and total producing rates. There are analytical models in Saphir /
Topaze which will account for layering, such as the double-permeability model. We will call a
multilayer test an operation where we will adjust the test sequence or add measurements to
discriminate the different layer contributions. This can be done by producing the layers
successively or in different combinations, or by adding production log measurements.
Interference tests
In any test we can add the interference of the other wells in the Saphir analytical and
numerical models. Despite this model refinement we will still consider that we are in a
standard test if the reference production history and the pressure are taken at the same well.
In a proper interference test the pressure that we interpret was acquired in an observation
well, i.e. not in the well from which the reference rates were recorded. This will generally imply
that the observation well is not producing and that most of the amplitude of the observed
pressure change is, precisely, this interference.
Both tools are illustrated in the schematic below with the leading parameters of a conventional
or formation tester specific PTA model indicated.
Fig. 11.B.1 – Multi probe WFT tool Fig. 11.B.2 – Multi packer WFT tool
The probes of the probe tool actually penetrate into the formation and are usually sealed off with a pad/packer.
Both tools can have multiple pressure and temperature ‘observer’ probes spaced vertically in the tool string. The probe tool can also have an observer probe at any angle with respect to the active probe and at the same level as the active probe; this is illustrated in the above left figure, where an extra observer probe has been placed in the tool string on the opposite side of the active probe (180°). The advantage of the multiple probe tools is that they allow the detection of vertical communication between the active probe and the observing probes. Multiple probes can also sometimes allow the differentiation of changing anisotropy.
The tool is designed to control the withdrawal of fluid from a probe or between packers. The measurement of pressure and the sampling of formation fluids involve the withdrawal of fluid through a probe penetrating into the formation, or through a pad set on the formation wall, thus inducing a ‘drawdown’ in the vicinity of the probe (pad) as fluid is withdrawn into a sample chamber or pumped through by the tool. This period is followed by a ‘shut-in’ period to let the pressure build up to static conditions for the measurement of the ‘virgin’ pressure of the formation at this fixed depth. The transient pressures are increasingly used for the interpretation of reservoir parameters, mainly permeability. Early permeability estimates promote greater confidence in test design and in the decision of whether further well testing can be skipped or not. Formation test straddle packers are also increasingly used for DST-type tests.
Besides yielding a value for permeability, single probe data can identify or confirm thin flow
units within reservoir sections, where it may not otherwise be clear that vertical barriers are
present. The ultimate goal of such formation tests is to reduce the use of conventional well
testing with all the downhole and surface hardware this requires and thus increase safety by
avoiding any flow of live reservoir fluids at surface. Such tests will also complement
conventional well testing when testing every zone or layer is not possible, and thus enable
more detail in the vertical reservoir characterization.
- Mud pressure: the hydrostatic pressure of the fluid in the well before the probe/pad or the packers are set.
- Sampling: the pressure recorded as the formation fluid is allowed to flow or is pumped to or through the tool.
- Mud pressure: the pressure returns to the hydrostatic pressure of the fluid in the well after the tool has been unset.
In flow regime ‘2’ there is a vertical contribution to flow, and if the perforated interval is small
enough, which it invariably will be using a small probe of limited dimension, a straight line of
slope –½ (negative half slope) will develop in the Bourdet derivative, corresponding to
spherical or hemi-spherical flow. The pressure change is then proportional to $1/\sqrt{\Delta t}$.
The relation of the pressure change and the ‘one over the square root’ of the elapsed time is:
$$\Delta p = \frac{70.6\,qB\mu}{k_s r_s} - \frac{2453\,qB\mu\sqrt{\phi\mu c_t}}{k_s^{3/2}}\,\frac{1}{\sqrt{\Delta t}}$$

With: $k_s = \left(k_r^2\,k_v\right)^{1/3}$

$$\Delta p = m\,\frac{1}{\sqrt{\Delta t}} \qquad \text{where} \qquad m = \frac{2453\,qB\mu\sqrt{\phi\mu c_t}}{k_s^{3/2}}$$

$$\Delta p' = \frac{d\Delta p}{d\ln(\Delta t)} = \Delta t\,\frac{d\Delta p}{d\Delta t} = \frac{m}{2}\,\frac{1}{\sqrt{\Delta t}} = \frac{1}{2}\,\Delta p$$
The characteristic flow regime is spherical flow until upper and lower vertical flow boundaries
have been reached and then followed by classical radial flow, IARF in the reservoir.
The interesting flow regime here is spherical flow, during which the pressure change is proportional to $1/\sqrt{\Delta t}$ and the Bourdet derivative follows a negative half unit slope straight line. From this flow regime it is possible to determine the anisotropy $k_v/k_r$.
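As a minimal illustration of how these relations can be used (a sketch only; the field-unit equations are those written above, and every numerical input is a hypothetical value, not taken from this book), the spherical permeability can be backed out from the spherical-flow slope and combined with the radial permeability from IARF to estimate the anisotropy:

```python
import math

# Hypothetical inputs in the field units of the equations above (assumed values)
q = 10.0       # STB/D, probe withdrawal rate
B = 1.2        # RB/STB
mu = 0.8       # cp
phi = 0.22     # porosity, fraction
ct = 1.2e-5    # 1/psi
m = 2.0        # psi.hr^0.5, slope of delta-p vs 1/sqrt(t) during spherical flow
kr = 25.0      # mD, radial permeability from the IARF stabilization

# m = 2453*q*B*mu*sqrt(phi*mu*ct) / ks^(3/2)  =>  ks = (2453*q*B*mu*sqrt(phi*mu*ct)/m)^(2/3)
ks = (2453.0 * q * B * mu * math.sqrt(phi * mu * ct) / m) ** (2.0 / 3.0)

# ks = (kr^2 * kv)^(1/3)  =>  kv = ks^3 / kr^2
kv = ks ** 3 / kr ** 2
print(f"ks = {ks:.2f} mD, kv = {kv:.3f} mD, kv/kr = {kv / kr:.4f}")
```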
The formation test option allows the inclusion of any number of observation probes in the test
setup and the study of the vertical response to the disturbance caused by the active probe.
It is possible to load in one session one active probe pressure measurement and as many
observation probes pressure channels as necessary.
The rate history can be loaded as classic flowrate versus time (downhole or surface rates) but
it is also possible to load downhole pump or chamber volumes and convert this to the
appropriate rate history within the module.
Once the data is loaded the classic workflow of conventional pressure transient analysis is
followed. The extract option can extract all gauges, active and observers, simultaneously.
Prior to the extraction the interpreter will define the tool configuration and assign each
pressure channel to its appropriate probe.
The next figure illustrates the loglog plot of the pressure response of each probe during the build-ups recorded after the tool pump-through.
We can see that the active probe and all the observers appear to have a common stabilization
at later time thus indicating common kh. The active and the nearest observer probes have
largely a common derivative behavior indicating no vertical changes of the reservoir
characteristics. The observer probe furthest away from the active probe is recording close to
its gauge resolution.
There is no model catalogue as such in the WFT module. The model is either for the probe or
packer configuration of the tool. We illustrate the two model dialogs below.
Fig. 11.B.6 – Model dialog probes
Fig. 11.B.7 – Model dialog packer
The well can be vertical, slanted or horizontal and the reservoir homogeneous, double porosity
or composite. It is also possible to combine the models with outer boundaries.
Below we illustrate the final model match with the WFT probe model with no anisotropy.
The downhole pressure recording of a typical DST is shown in the figure below. During this
operation the well is open and closed with a downhole valve. The pressure sensor is positioned
below the downhole valve to follow the reservoir pressure when the well is shut in.
We start at initial pressure pi. Before the well is open downhole an initial static column of fluid,
typically a cushion of diesel or water, may or may not be placed above the downhole valve.
During this operation the surface valves remain open to flow to bleed the air.
The test starts by opening the downhole valve. The pressure instantaneously switches from pi to the initial static pressure p0. If the well is not flowing the pressure will remain stable or will
change very little, and the test will be over.
If we have a producer and if p0 is less than pi the fluid starts to flow from the reservoir to the
well, the static column rises and so does the pressure. This is the beginning of the slug test. It
will last until the fluid eventually reaches surface. The well is shut in and we have our first
buildup. While the well is shut in the static column will remain stable, but it will not be recorded
by the pressure gauge which is on the other side of the valve.
When we open the well again we restart at the previous level of the static column and the slug
resumes. At this stage several scenarios can happen:
- The system does not have enough energy and the slug dies. Throughout the test there is
no flow to surface.
- The system has enough energy and the fluid breaks at surface. At this stage the pressure
starts to decrease again and we are back to the conditions of a standard test. After
stabilization the fluid is flowed through the separator, surface rates and samples are taken
until a final buildup is run. This standard scenario is the one shown in the figure above.
If we cut the slug sequence into a discrete series of intervals we can compute the corresponding downhole rates. The internal diameter of the effective well and the fluid density are used to obtain a wellbore storage coefficient, which is then used to convert the pressure rise over each interval into a rate.
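A minimal sketch of that conversion is given below; the rising-column storage relation C = 144·A/(5.615·ρ) and the rate relation q = 24·C·(dp/dt)/B are the standard field-unit expressions, and every numerical input is hypothetical:

```python
import numpy as np

# Hypothetical slug pressure record (gauge below the tester valve): time in hours, pressure in psia
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
p = np.array([500.0, 740.0, 930.0, 1080.0, 1200.0, 1295.0, 1370.0])

# Wellbore storage of a rising liquid column, C [bbl/psi] = 144 * A / (5.615 * rho)
d_in = 4.0                               # tubing internal diameter, inches (assumed constant)
A = np.pi * (d_in / 12.0) ** 2 / 4.0     # cross-section, ft^2
rho = 55.0                               # wellbore liquid density, lb/ft3
B = 1.15                                 # formation volume factor, RB/STB
C = 144.0 * A / (5.615 * rho)            # bbl/psi

# Sandface rate from the pressure rise: q [STB/D] = 24 * C * (dp/dt) / B, with dp/dt in psi/hr
dpdt = np.gradient(p, t)
q_sf = 24.0 * C * dpdt / B
for ti, qi in zip(t, q_sf):
    print(f"t = {ti:4.1f} hr   q_sandface = {qi:7.1f} STB/D")
```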
The figures below show the buildup loglog and history match. The trick here is to use a
changing well model and the changing wellbore storage option. There are two statuses: the first, during the slug, has a wellbore storage of 0, and the buildup has whatever value is appropriate.
In a standard test we have two separate sets of data: the rates and the pressures. In PTA one
will try to interpret the pressure response using the rate history to correct for the
superposition effects. In Production Analysis one will do more or less the opposite. What is
puzzling at first glance in slug test analysis is that the pressure, by differentiation, also gives
the rate. So here we have a single transient signal that is supposed to give all information at
the same time, and from which we are expected to make an analysis. How come?
Mathematically, a slug is the answer to an instantaneous production with wellbore storage. The
volume of the instantaneous production corresponds to the initial static column, i.e. the
internal cross-section of the tubing at the liquid interface multiplied by the height of the
difference between the initial static column (psf=p0) and a virtual stabilized static column
(psf=pi). This is virtual as the height of the stabilized column could exceed the internal volume
of the completion if the well / reservoir system has enough energy to flow to surface.
So to make a parallel with a standard test the pressure remains the pressure but the rate is
replaced by the value of this virtual volume.
The slug test is mathematically equivalent to a closed chamber test, even if the volumes are
calculated differently. The hydro-geologists have been doing this sort of test for a long time. In
the very low permeability formations for nuclear waste disposals, they pressure up a fixed
volume of water and watch the relaxation. They call these tests impulse tests, which should
not be confused with the trademarked impulse test by Schlumberger.
In 1975 Ramey et al published a set of loglog type-curves showing the slug test responses, in
dimensionless terms, for various values of the usual dimensionless parameter CDe2S. These
type-curves start with a horizontal behavior and finish with a negative unit slope. The
pressure data would be matched on the type-curve after the following transformation
Ramey’s slug transform:

$$p_D(t) = \frac{p_i - p(t)}{p_i - p_0}$$
In 1995 Houzé and Allain had to adapt Saphir for ANDRA (the French nuclear waste agency) in order to implement the hydro-geologists’ impulse tests for very low permeability formations. This is when they realized that, by multiplying the 1975 Ramey type-curves by t, one would get… the dimensionless derivatives (in red below) in the 1983 Bourdet derivative type-curves.
So, in order to match the analytical models the modified Ramey transform will be:

$$\frac{p_i - p(t)}{p_i - p_0}\,t = \frac{3389\,\mu C}{kh}\;p_D'\!\left(T_{match}\cdot t\right)$$
Where p’D is the Bourdet derivative of the corresponding dimensionless model. Another way to
present this (initially surprising) observation is that:
After the modified Ramey transform, the pressure response to an instantaneous (Dirac)
production matches the derivative of the constant rate response for the same system.
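As a small illustration of the transform itself (the pressure record below is a synthetic stand-in, not real slug data, and the plotting details are only an assumption of how one might look at it):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical slug data: elapsed time (hr) and pressure (psia)
t = np.logspace(-3, 1, 200)
pi, p0 = 3000.0, 1800.0                   # initial reservoir and initial static column pressures
p = pi - (pi - p0) * np.exp(-t / 0.5)     # synthetic stand-in for a measured slug response

# Modified Ramey transform: y(t) = t * (pi - p(t)) / (pi - p0)
y = t * (pi - p) / (pi - p0)

# On a loglog scale this transformed curve can be overlaid on the Bourdet derivative of the
# constant-rate model, scaled by 3389*mu*C/kh as in the equation above.
plt.loglog(t, y, label="modified Ramey transform of slug data")
plt.xlabel("elapsed time, hr")
plt.ylabel("t * (pi - p) / (pi - p0)")
plt.legend()
plt.show()
```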
Fig. 11.C.8 – Typical duration Slug: Loglog plot
Fig. 11.C.9 – Extended duration Slug: Loglog plot
To perform the analysis the values of pi and p0 must be input. The wellbore storage coefficient
can be calculated from the tubing diameter and the wellbore fluid density. It is assumed that
the inner diameter is constant. If changes occur there will be a corresponding change in the
wellbore storage coefficient.
With a fixed storage and a loglog response like the one displayed above on the left one can see
immediately that it is not possible to get both skin and kh with any confidence from the
analysis. This is illustrated in the figures below with two different diagnostics. In the first we
have the skin set to 0, in the second the skin is set to 10. A regression was run on kh for each
case and the values are respectively 1740 and 4610 mDft.
In the worst case we can only establish a relationship between the permeability and skin.
However, in the oil industry most slug tests are followed by a downhole shutin. The pressure
slug can also be used to calculate the downhole flowrate as we have illustrated in a previous
section, as long as we have a reasonable idea of the fluid density. It goes without saying that
we know the hardware we have put in the well, so we know dimensions such as internal
diameter and if there are any diameter changes in the well. We also know what fluid has been
put in the pipe above the tester valve, so p0 is known and acts as a check.
The figures below show the slug match versus the buildup match. Due to the length of the
buildup, the pressure and time match becomes ‘unique’ with high confidence in analysis
results.
11.D.1 Introduction
Most reservoirs have a commingled production from multiple zones. These zones may or may
not be from the same geological formations, and they may or may not be communicating at
the level of the formation. In most cases we perform a Pressure Transient Analysis considering
the reservoir as a single global contributor.
However we want to get a better understanding of the vertical distribution of the production,
hence the contribution and characteristics of individual layers, or selected groups of layers. We
will execute a multilayer test and perform a multilayer analysis. The challenge is to see if we
can discriminate the layers with the standard information we have, or to add measurements
and/or specific test sequences to allow this discrimination.
Analyzing a multilayer reservoir may be eased if there is a ‘multilayer behavior’, i.e. a pressure response that cannot be matched with a homogeneous model. However this is not required. Even multilayer formations showing a globally homogeneous behavior in terms of pressure-rate response can be subject to multilayer tests and analyses if we decide to add measurements or adjust the test sequences in order to isolate individual contributions.
Analytical models: The first and most common model is the double-permeability
reservoir, which has now been around for thirty years. It has been extended to three, four
and more layers, and recent developments can model the response of complex slanted and
multilateral wells, all allowing crossflow between the layers.
Numerical models: A multilayer numerical model with cross flow in the reservoir can be
easily built by defining vertical leakage governing the flow at the interface between each
layer. The anisotropy defined here is not the anisotropy of the layers but of the semi-
permeable wall between the layers.
The typical analytical model used to perform such a standard analysis is the double-permeability model, which assumes that the reservoir consists of two layers that may or may not be producing to the well, and that may or may not be communicating at the level of the reservoir.
The simplest homogeneous reservoir model will have 3 parameters: C, kh and skin. The double-permeability model will have 7: C, total kh, skin to layer 1, skin to layer 2, ω (storativity ratio), λ (cross-flow connectivity) and κ (permeability ratio). This is described in detail in the Reservoir Models chapter.
There are more complex analytical models available that model layered formations: three
layers, four layers, and generic models allowing the combination of as many commingled layers
as we want with separate well, reservoir and boundary models. However in a standard test we
match models on a single, global Bourdet derivative response. Matching and identifying the
seven parameters of the double-permeability model is already an act of faith, and will already
require strong contrast between layer properties to show any specific behavior. Matching even
more complex models without any strong information to constrain them is simply ridiculous.
The production logging data analysis can give access to rate versus depth logs, and therefore to the individual layer rates and contributions.
- Layer rate: the production rate measured above a layer; it is the sum of the production of all the layers below.
- Layer contribution: the individual production of a single layer.
The objective is to acquire and observe the transient behavior of the layer rates, and then to attempt to analyze them to get the individual properties by:
- either analyzing the layer system globally and describing it in detail with a multilayer model match,
- or processing the data in order to extract the individual behaviors and then analyzing the various layers separately using a classical single layer model.
The objective of a multilayer test is to gather the maximum information in a single well test; the design must therefore offer all the circumstances necessary for a complete acquisition.
The basis of a multilayer test design is a multirate test, and the sequence of PLT operations allows the acquisition of both static measurements of transient rate versus time and measurements of stabilized pressure.
A first step is to define the possible static survey points, according to the openhole logs and/or the perforations. At this level logic and realism are the key points for a necessary simplification.
The operation is then programmed to acquire both static and versus-depth measurements, i.e.:
This sequence includes the stabilized rate versus depth acquisition under various flowing and shut-in conditions, and also the transient rate static acquisition at the key points under various rate changes.
Both versus-depth and versus-time acquisitions can be used, but each presents its own interest and provides certain information with more or less success:
- The rate versus time acquisition measures the transient rate behavior; its analysis gives better information on the early time behavior (well conditions) and on the late time behavior (limit conditions). Its weakness: it is easily affected by noisy acquisition conditions.
- The rate versus depth acquisition measures the stabilized rate. It is more adequate for the IARF period (permeability and skin determination). Its weakness: it is a constant value ignoring any unstable behavior.
The model optimization is global and aims at adjusting the complete set of parameters. Compared to sequential and iterative methods (analyzing layer 1, then 1+2, then 1+2+3...), this method has the advantage of not accumulating the uncertainty at each step.
The field data analysis presents several difficulties and pitfalls that can be avoided by following an adequate workflow.
The pressure data must first be corrected to the same depth when the transient behavior was recorded at several depths.
The stabilized rate and/or contribution data must be calculated from the Production Logging measurements with the adequate software.
The transient rate data must be extracted, set in properly formatted files, and then synchronized with the pressure data.
Note: the pressure and rate model matching method used for this multilayer analysis does not specifically require the rate data to be in the ‘rate’ or the ‘contribution’ format, since the model is able to provide the simulated individual production in any format.
The total kh value will be kept in the rest of the analysis as a constraint for the global
regression.
The static pressure, found at 5869 psia, will also be kept in the continuation of the analysis as a constraint, as long as no ‘layered pressure’ hypothesis is considered.
The wellbore storage value will also be kept since it is a well characteristic.
The limit diagnosis is here a single fault at around 500 ft.
By default, the first multilayer model is defined with uniform permeability and equal skins, and the boundary conditions are taken identical to the standard analysis results.
The result of this first run confirms the global pressure behavior:
But the non-matching simulated layer rates and contributions clearly reveal the consequence of the permeability contrast on the individual contributions:
This has to be done on the complete pressure history and by selecting the adjustment of the stabilized rate values as objectives.
For the sake of efficiency it is absolutely compulsory to select the most adequate regression points on the history plot, taking care to eliminate ‘abnormal’ pressure behavior such as phase segregation effects or operational errors, i.e.:
The effect of the nonlinear regression on the contribution and rate matching is immediate:
The quality of the pressure match on the loglog and history plots is maintained:
The same process is used, with the nonlinear regression, to adjust the limit distance:
- The permeability, the skin, the wellbore storage and the initial pressure values are removed from the regression parameters list.
- More weight is given to the transient rate measurement matching.
It would be presumptuous to claim that this solution is unique, since we are looking for the minimum of three values (the average distances between simulated and measured pressures and rates) in a 12-dimensional space. This complex space certainly contains multiple areas where the searched minimum is approached (local minima), and a unique one which is the absolute minimum.
It is certain that other sets of slightly different parameters could give equivalent satisfaction in terms of pressure and rate matching, but any result has to be guided and eventually chosen according to other external information coming from geology, geophysics or simply logic.
This method remains, by its principle and its flexibility, a perfectly adequate tool to help us in this search and to check the validity of the results.
C. V. Theis developed the interference test for hydrogeology (1935). Johnson, Greenkorn and
Woods devised the pulse test and Kamal and Brigham perfected and developed a special
analysis procedure using type curves to facilitate the analysis by hand. None of the old
analysis techniques are relevant today, thus we will not repeat what is already abundant in the
old literature on well testing but concentrate on the modern methods of analytical and
numerical modelling.
Interference testing involves at least two wells but the test can easily be extended and
analysed today with any number of wells present in the interference test scenario. Simply put,
the interference test involves the withdrawal or injection of (reservoir) fluids from/into one
well while measuring the resulting response in another. So we have the concept of an “Active
well” and an ‘Observation well’. It is the well response in the ‘Observation’ well that is subject
to analysis and since this well is (hopefully) at static conditions the concept of skin and
wellbore storage is eliminated. Thus, in the majority of cases, the Line source solution (or the
Theis’ solution) as presented in the chapter on ‘Theory’ can be used directly to analyse the
observation well pressure response.
The solution, at any point and time, for a Line Source well producing a homogeneous infinite
reservoir, is given by the following:
$$p(r,t) = p_i + \frac{70.6\,qB\mu}{kh}\;Ei\!\left(\frac{-948.1\,\phi\mu c_t r^2}{kt}\right)$$
A typical line source response is displayed in the figures below, on loglog (with the Bourdet
derivative) and semilog scale.
Fig. 11.E.1 – Line source loglog plot
Fig. 11.E.2 – Line source semilog plot
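A minimal numerical sketch of this solution at an observation well, using the field units of the equation above and evaluating Ei(−x) as −E1(x); all parameter values below are hypothetical:

```python
import numpy as np
from scipy.special import exp1   # E1(x); Ei(-x) = -E1(x) for x > 0

def line_source_pressure(t_hr, r_ft, pi=5000.0, q=1000.0, B=1.2, mu=0.8,
                         k=50.0, h=100.0, phi=0.2, ct=1.0e-5):
    """Pressure at radius r and time t for a constant-rate line source well.
    Field units: psi, STB/D, RB/STB, cp, mD, ft, 1/psi, hours (assumed inputs)."""
    x = 948.1 * phi * mu * ct * r_ft**2 / (k * t_hr)
    return pi + (70.6 * q * B * mu / (k * h)) * (-exp1(x))

t = np.logspace(0, 3, 50)                       # 1 to 1000 hours
p_obs = line_source_pressure(t, r_ft=1500.0)    # observation well 1500 ft away
print(p_obs[:5])
```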
As demonstrated in the chapter on ‘Theory’, there is a value of time above which the Line
Source Solution reaches a ‘regime’ where one can use an approximation of the highest interest
to the well test interpretation engineer. It is the semilog approximation and this regime is
called Infinite Acting Radial Flow, or IARF.
The workflow involved in the analysis of the observation well pressure response is thus largely the same as that of conventional well testing. Both loglog and semilog analyses are used and can be valid, and the line source and the wellbore storage and skin models are available to analyse and model the response.
Heterogeneities such as layering, double porosity and permeability and outer boundaries can
also be applied analytically.
The figure below illustrates the response in an observation well at static conditions resulting
from a simple constant rate production of the active well.
As illustrated in the figure above there is a delay (time lag) before the pressure gauge can start to see the response in the observation well. The loglog plot below is constructed as the pressure change from time t=0, thus the first points, acquired before there is an actual detectable response in the pressure signal, have to be ignored.
Below is the response plotted in loglog coordinates as the pressure change and the Bourdet
derivative.
φct Porosity-compressibility (storativity) product.
kh Permeability-thickness product.
Both are results from the analysis. It is said that the only way to get a reliable value for rock compressibility is through an interference test. Of course this assumes that you know the fluid saturations, the individual fluid compressibilities and the porosity. Normally, however, it is the φct product that is posted as the result with the permeability vector k.
And below we illustrate the sensitivity to the permeability thickness product, kh.
Not surprisingly, we observe that the lower the porosity and the higher the permeability, the faster the disturbance caused by the active well travels to the observation well. The lower the permeability and the porosity, the higher the response amplitude in the observation well.
Actually the original leading parameters can be seen in the line source equation and the argument of the exponential integral function:

$$p(r,t) = p_i + \frac{70.6\,qB\mu}{kh}\;Ei\!\left(\frac{-948.1\,\phi\mu c_t r^2}{kt}\right)$$

On the time scale, determining the time lag, it is the diffusivity group:

$$\frac{k}{\phi\mu c_t}$$

On the pressure scale, determining the pressure amplitude, it is the transmissivity group:

$$\frac{kh}{\mu}$$
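As a small worked illustration (hypothetical values; the time-lag estimate uses the common rule of thumb that the response becomes noticeable when the argument of the exponential integral falls to the order of one, which is an assumption, not a statement from this book):

```python
# Hypothetical observation-well scenario, field units (mD, ft, cp, 1/psi, hr)
k, h = 50.0, 100.0            # permeability and thickness
phi, mu, ct = 0.2, 0.8, 1.0e-5
r = 1500.0                    # inter-well distance, ft

diffusivity_group = k / (phi * mu * ct)     # controls the time lag, mD.psi/cp
transmissivity_group = k * h / mu           # controls the amplitude, mD.ft/cp

# Rough time-lag estimate: the argument 948.1*phi*mu*ct*r^2/(k*t) drops to ~1
t_lag_hr = 948.1 * phi * mu * ct * r**2 / k
print(f"diffusivity group   : {diffusivity_group:.3e}")
print(f"transmissivity group: {transmissivity_group:.1f}")
print(f"approximate time lag: {t_lag_hr:.1f} hr")
```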
Below we have illustrated the response in the loglog plot of two different cases of wellbore
storage and skin and compared this to the line source solution. The figure to the right
illustrates the increase in time lag and the decrease in the amplitude of the response.
Note that the wellbore storage coefficient value used in this simulation is taken very large to
make the effect more visible. It is certain that a normal wellbore storage value would have a
lot less effect.
In order to keep it simple and in line with what was discussed in the foregoing section on
double permeability, we will only consider a system of two layers with cross flow.
The figure below illustrates the simulated loglog response of a two layered system with the
following characteristics matched with the analytical double permeability model for
comparison.
One can also open and close perforations of the layers in the observation well of the example.
If the high permeability and low storativity layer is turned off the pressure gauge in the
observation well will only record the pressure interference through the lower low permeability
layer. One can see the comparison of the response in the observation well with the upper layer
closed and the response with both layers open. One can indeed conclude that the effect of the
active well will induce cross flow from the lower to the upper layer in the observation well. See
the figure below.
Fig. 11.E.10 – Comparison of response with dominating interfering layer closed off
The figure below illustrates the response in a vertical observation well caused by production of
a vertical, and two differently configured horizontal wells. The horizontal well at 0° has the toe
pointing to the active well in the X direction and the well at 90° has the heel and toe parallel to
the Y direction.
Not surprisingly, one observes that the response caused by the horizontal active well with the
tip pointing towards the observer is ‘seen’ by the observer earlier than the interference caused
by the horizontal well which points in the Y direction.
Another interesting observation is that the response amplitude caused by the vertical active well is higher than that of the horizontal well at 90°, and that the horizontal well placed in the X direction, pointing towards the observer, causes the highest delta p.
From this it follows naturally that it should be possible from these observations to identify areal anisotropy, which can also be used in the numerical models.
Fig. 11.E.11 – Vertical observer horizontal active well: Loglog and history plot
The numerical model will also take care of reservoirs that cannot be handled analytically. Composite reservoirs can easily be modelled using the numerical model. Variable data fields such as thickness, porosity and permeability can be modelled.
Below we illustrate the use of the numerical model on a synthetic reservoir with variable permeability. The distance between the observation and the active well is some 12500 ft as the crow flies; however the actual path the diffusion has to travel before a response is seen in the observation well is a lot more tortuous because of the faults. The active well is producing for a month, followed by a two-month shut-in of the active well. The observation well is at static conditions throughout the interference test. It is possible to extract both the production and the shut-in and display the derivative and the pressure change in the loglog plot. However it can immediately be seen in the simulated pressure history that even after two months of shut-in the effect has not yet been detected in the observation well, thus the loglog plot will be empty and no diagnostics can be made. In fact in this case the diagnostic plot becomes the history plot and the interpreter will thus just try to match a model to the history response. See the next section on ‘Interference testing and real life’.
The signal in itself must be measurable by the pressure instrument, thus the resolution of the tool is an issue that cannot be overlooked. This was particularly critical in the past, when the quality of the measuring tools was average and the best pressure gauges were not readily available, and when they were, it was at an exorbitant price. Today (2010) this is no longer an issue as most pressure and temperature sensors, at least those based on quartz technology, are readily available and at an affordable price. Even permanent downhole gauges are today of such a quality that they can easily be used for interference testing, where only minute pressure changes are necessary in order to detect a signal through the reservoir.
The most common natural signal present in static pressure measurement is the Tide; the
combined effect of the gravitational forces exerted by the Moon, the Sun and the rotation of
the earth. This effect is not only seen in pressure measurements in offshore fields, but is also
common in the middle of the desert or in the middle of a rain forest. To enable the detection of
a manmade signal it must therefore be possible to distinguish this signal from the tide signal.
A pulse test would never be designed in such a way that the pulse period would correspond to
the natural tide period.
The below figure illustrates the tide signal measured in a static observation well during a long term interference test in some part of the world.
There is no response in the signal other than the tide effect, so there is no other external influence on the measurements. We are ‘seeing’ no interference, i.e. there is no ‘break’ in the signal. To illustrate what we mean by a ‘break’ we look at the same signal some three months later in the history of the measurements. There is definitely a ‘break’ that can be explained by the interference from a nearby injector.
Although the ‘break’ is small to start with we can definitely see that there is an interference
response in the well which was caused by a nearby injector when we look at the ‘whole’ story.
The tide signal oscillates with a delta p of about 0.3 psi, the ‘break’ is achieved by a total delta
p of about 1.5 psi.
The following figure illustrates the response in a well where the pressure rises linearly over
time; in this pressure response there is no ‘break’ thus no interference response. In fact the
pressure gauge is ‘drifting’ with a drift of about 0.03 psi/day.
The workflow is based upon the same principle as conventional PTA where a diagnostic plot is
generated and plotted in loglog coordinates with the pressure change and the Bourdet
derivative plotted versus the elapsed time. If the workflow is adhered to it is an easy task to differentiate the shapes, and thus the different reservoir and boundary models, and to deduce all the pertinent parameters associated with the chosen model. The requirement, therefore, is that some ‘clean’ periods are built into the test design. By clean periods we mean periods of constant flowrate, be they shut-in, injection or production periods.
Unfortunately experience shows that although these ‘clean’ periods may have been built into the design, they are often too short, so there is not sufficient time to see in the pressure response the actual influence of these ‘clean’ periods, and any extraction to the loglog plot is not possible. In real life the production or injection is never totally stable, the flowrate is variable and, on top of this, there may not be any ‘built-in’ clean periods at all, so the interpreter is left with an empty loglog plot and is reduced to the task of blindly matching a pressure history.
The below figure illustrates a typical long term interference test where the active well is just following its normal production operation, with all the day-to-day incidents that always happen to disturb this production, such as unscheduled shut-ins, changes in rate, perturbations due to normal daily operations, etc. In fact the interference design here is just the ramp-up of production at the end of November 2001, which we can see has little noticeable effect on the pressure response in the observation well.
By artificially adding a clean period to the production history of the previous example, a shutin
at the end, and generating the model to 5000 hrs shutin time (some 200 days) we can easily
observe that the period of buildup has to be a minimum of some 100 days long to give a
reasonable response at the observation well and allow the extraction of a loglog plot for
diagnostic purposes. Not an easy task to sell to the management. See the illustration below.
The only diagnostic we can exercise here is to bluntly state that, yes, there is a response in the observation well and the reservoir between the wells is in communication. Using the history and the simple homogeneous model will (hopefully) give a match, thus a value for the permeability-thickness kh and the storativity product φct. Assigning any parameters to heterogeneities would be pure speculation.
Very often we are actually only looking for a yes or a no answer. If no communication is
detected this is also valuable information to the practicing reservoir engineer.
Interference testing in producing fields is possible and can be very successful. When it does not work it is usually due to under-design of the desired minimum pulse amplitude and the failure to induce ‘breaks’ that can be recognized and distinguished from the background noise and trend in the observers.
The ideal interference test is of course carried out in a reservoir that is at rest, but we all know
that this would be a costly affair in a producing field....
As highly sensitive pressure gauges were both expensive and not readily available, they were often reserved for this kind of test.
Later, high accuracy electronic gauges became common and cheaper, and consequently the argument that the pulse test enabled the shortening of the classical interference test no longer held.
However, the pulse test still has an application when the background trend and noise in the reservoir are such that the small response of an interference test can be difficult to detect in the pressure history. The pulsing behaviour resulting from a pulse test is easier to recognize and is not easily lost when the time lag of the response is long.
If one is planning and programming a pulse test one should be aware that there is a
phenomenon that can easily perturb the results. Many reservoirs are under the influence of tide effects, which overall resemble the response from a pulse test.
remember the tide influence on the background trends and noise and we need to know the
tide amplitude and frequency. The pulse test must be programmed to be ‘out of phase’ with
the tides and the response amplitude must be such that the two signals can be differentiated.
The early studies of pulse tests proposed methods for manual analysis. At the time, this ease of manual analysis was considered the main advantage of the pulse test compared to conventional interference testing. The method was based on the ‘tangent method’ and is illustrated in the figure below.
The correlation charts were developed for a homogeneous infinite reservoir with line source
wells.
The leading parameters in the correlation charts are based on the parameters of the ‘tangent’ approach defined above:
$$F' = \frac{\Delta t_p}{\Delta t_c} \qquad \text{ratio of pulse time to cycle time}$$

$$(t_L)_D = \frac{t_L}{\Delta t_c} \qquad \text{dimensionless time lag}$$

$$(\Delta t_c)_D = \frac{k\,\Delta t_c}{948.3\,\phi\mu c_t r^2} \qquad \text{dimensionless cycle time}$$

$$p_D = \frac{kh}{70.6\,qB\mu}\,\Delta p \qquad \text{dimensionless pressure}$$
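A small worked example of these definitions, with purely hypothetical field values:

```python
# Hypothetical pulse test readings, field units (hr, psi, STB/D, cp, mD, ft, 1/psi)
dt_p = 6.0        # pulse (flow or shut-in) time
dt_c = 24.0       # total cycle time
t_L = 4.5         # measured time lag
dp = 0.8          # measured pulse amplitude
q, B, mu = 2000.0, 1.2, 0.8
k, h = 100.0, 50.0
phi, ct, r = 0.2, 1.0e-5, 1000.0

F_prime = dt_p / dt_c
tL_D = t_L / dt_c
dtc_D = k * dt_c / (948.3 * phi * mu * ct * r**2)
p_D = k * h * dp / (70.6 * q * B * mu)
print(F_prime, tL_D, dtc_D, p_D)
```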
The correlation charts are not shown in this book as they are seldom used nowadays, these methods having been replaced by the modern approach (the right stuff). However, older textbooks on pressure transient analysis carry these charts, so they are readily available to the users.
The example below illustrates a history match of a pulse test where the pressure response is
influenced by tide effect.
The figure below illustrates the typical geometric scenario of a vertical interference test.
Type curves were developed to allow conventional analysis by hand in the 1980’s (Kamal). The
curves had to be developed for a number of geometrical situations as both the active and
observer perforations have to be identified and appropriately placed with respect to each other
and the upper and lower boundary. Using the type curves it was possible to estimate both
horizontal and vertical permeability.
Today we use modelling as the analysis method. We can apply directly the models developed
for the formation tester tools and can thus determine all the pertinent parameters of the
reservoir including the vertical permeability.
Below is illustrated the model match of a vertical interference test in a homogeneous infinite
reservoir with vertical anisotropy. The response and the model match of the active and
observer perforations are shown in the same graphic.
12 – Well modeling & Performance Analysis
12.A Introduction
The objective of any dynamic data analysis is the best possible understanding of the ‘reservoir / well’ system. To achieve this, it is absolutely necessary to dissociate the respective influences of the reservoir side and of the well side. Not only does the performance of the system depend on both, but the analysis of the reservoir also requires being able to correct the data for the wellbore effects in order to extract the pure reservoir response.
The following paragraphs deal with ‘well modeling’, necessary to correct the pressure data for depth and to include the wellbore effects in the reservoir analysis.
Objection: Needless to say the idea of IPR and AOF is an anathema to the purist Pressure
Transient Analyst. Everyone knows that, even in the optimistic case of an infinite reservoir, the
pressure will continue drawing down when we are at constant rate, and the rate will keep
declining when we are at constant flowing pressure. So, from a well testing point of view, this
makes as little sense as the notion of a productivity index.
Objection to objection: Of course this objection is not completely correct. There is a regime
where there is such a relationship and that is Pseudo-Steady State. When PSS is reached the
shape of the pressure profile stabilizes and moves downwards. A relation between the
production and the difference between the average pressure and the flowing pressure can be
established.
Objection to objection to objection to objection: This will not be too far out and, in any case, it
has proven to work relatively well in the past, especially for gas reservoirs. In addition, IPR
from PSS models are the best we have to simulate production networks and to input into reservoir simulators, where the details of the transient responses are not that important. IPR / AOF are useful because they give a good idea of the performance of the wells, and they are really
the least bad thing to use when we optimize reservoir production.
These equation parameters can either be determined by adjusting the IPR curve to a measured well test data set, or they can be calculated using empirical equations.
The PI (productivity index) relation is the simplest inflow performance relationship, where the inflow is directly proportional to the drawdown.
$$PI = \frac{Q}{\bar{P}_r - P_{wf}}$$

Where Q is the production rate, $\bar{P}_r$ the average reservoir pressure and $P_{wf}$ the flowing bottomhole pressure.
It just requires a minimum of two measured rate and pressure values under flowing conditions.
It requires the knowledge of the well and reservoir drainage area parameters.
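As a minimal sketch of that first determination (hypothetical measurements): with two stabilized flowing points the linear IPR gives two equations, from which both the PI and the average reservoir pressure can be solved.

```python
# Two hypothetical stabilized flowing measurements: (rate in STB/D, flowing pressure in psia)
q1, pwf1 = 800.0, 2600.0
q2, pwf2 = 1500.0, 2250.0

# Q = PI * (Pr - Pwf) at both points -> solve for PI and Pr
PI = (q2 - q1) / (pwf1 - pwf2)
Pr = pwf1 + q1 / PI
print(f"PI = {PI:.2f} STB/D/psi, Pr = {Pr:.0f} psia")
```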
Oil case
Where:
Where: S = skin, μ = viscosity.
Gas case
Where:
Where: S = skin, μ = viscosity.
It was first used in gas cases, where turbulent flow cannot be ignored or neglected, and was then extended to oil cases.
The measured production rates and their corresponding flowing pressure values are used to determine the C and n coefficient values.
Gas case
$$q = C\left(\bar{p}^2 - p_{wf}^2\right)^n$$

Where:
n is the turbulent flow exponent, equal to 0.5 for fully turbulent flow and equal to 1 for laminar flow.
Oil Case
In 1973 Fetkovich demonstrated that the same equation can be used for oil wells.
The effect of the reservoir turbulence can be modeled with the use of the back pressure equation:

$$Q = C\left(\bar{p}^2 - p_{wf}^2\right)^n$$
$$m(\bar{p}) - m(p_{wf}) = aq + bq^2$$

Where a and b are the laminar and non-Darcy (turbulent) flow coefficients.
The method consists of obtaining, from a multirate well test, the pressure values corresponding to each production rate, determining graphically the equation coefficient values, respectively C & n or a & b, and then calculating the Absolute Open Flow.
Another approach is to evaluate the a & b values from empirical equations using the well and reservoir parameter values as input (e.g. the Jones method below).
$$\frac{Q}{Q_{max}} = 1.0 - 0.2\,\frac{P_{wf}}{\bar{P}_r} - 0.8\left(\frac{P_{wf}}{\bar{P}_r}\right)^2$$
If the reservoir pressure is above the bubble point, the Vogel IPR is only valid when the well flowing pressure is below the bubble point pressure.
In this case, the combination of Vogel with another IPR method is recommended.
$$Q = Q_b + \frac{PI \cdot P_b}{1.8}\left[1.0 - 0.2\,\frac{P_{wf}}{P_b} - 0.8\left(\frac{P_{wf}}{P_b}\right)^2\right]$$

With:
$$Q_{max} = Q_b + \frac{PI \cdot P_b}{1.8}$$

Where $Q_b = PI\,(\bar{P}_r - P_b)$ is the rate at the bubble point pressure.
The method consists of using pressure and production data to determine the PI, then the Qmax value.
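A minimal sketch of the combined straight-line / Vogel IPR described above, with hypothetical input values:

```python
def combined_vogel_rate(pwf, pr, pb, PI):
    """Rate for a given flowing pressure: straight-line PI above the bubble point,
    Vogel below, following the equations above (field units, hypothetical inputs)."""
    if pwf >= pb:
        return PI * (pr - pwf)
    qb = PI * (pr - pb)                       # rate at the bubble point pressure
    return qb + (PI * pb / 1.8) * (1.0 - 0.2 * (pwf / pb) - 0.8 * (pwf / pb) ** 2)

# Hypothetical example: pr = 4000 psia, pb = 2500 psia, PI = 1.5 STB/D/psi
for pwf in (3500.0, 2500.0, 1500.0, 14.7):
    print(pwf, round(combined_vogel_rate(pwf, 4000.0, 2500.0, 1.5), 1))
```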
A second approach is based on the fact that only the laminar coefficient of the equations depends on the flow duration; the non-Darcy coefficient remains independent of it.
The test design includes short flow periods of equal duration, not necessarily stabilized, at various rates, followed by shut-ins until stabilization, which are not necessarily of the same duration as the drawdowns. The resulting values are used to determine the non-Darcy flow coefficient.
An extended flow period until stabilization allows the correct laminar coefficient to be determined.
The flowing pressure drawdown values are calculated from the initial average pressure.
The modified isochronal test, proposed by Katz et al in 1959, is characterized by short shut-ins
between production periods, of equal duration, with neither of the periods necessarily
stabilized. The final flow to stabilization is followed by a final long shut-in.
The flowing pressure drawdown values are calculated from the last pressure of the previous shut-in.
$$q = C\left(\bar{p}^2 - p_{wf}^2\right)^n \qquad \text{or} \qquad q = C\left(m(\bar{p}) - m(p_{wf})\right)^n$$
A plot is made of

$$\log\left(\bar{p}^2 - p_{wf}^2\right) \;\text{versus}\; \log(q)$$

or

$$\log\left(m(\bar{p}) - m(p_{wf})\right) \;\text{versus}\; \log(q)$$

The n value is calculated from the slope, C from the line intercept.
Then:

$$AOF = C\left(\bar{p}^2 - p_{atm}^2\right)^n$$
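The graphical determination can be sketched numerically as follows (hypothetical test points; the fit is simply a straight line in log-log coordinates as described above):

```python
import numpy as np

# Hypothetical isochronal points: rate (Mscf/D) and flowing pressure (psia), with p_bar = 3000 psia
p_bar = 3000.0
q = np.array([1000.0, 2000.0, 3000.0, 4000.0])
pwf = np.array([2900.0, 2750.0, 2550.0, 2300.0])

x = np.log10(q)
y = np.log10(p_bar**2 - pwf**2)
slope, intercept = np.polyfit(x, y, 1)

# q = C*(p_bar^2 - pwf^2)^n  =>  log(dp2) = (1/n)*log(q) - (1/n)*log(C)
n = 1.0 / slope
C = 10.0 ** (-intercept * n)
p_atm = 14.7
AOF = C * (p_bar**2 - p_atm**2) ** n
print(f"n = {n:.3f}, C = {C:.3e}, AOF = {AOF:.0f} Mscf/D")
```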
$$m(\bar{p}) - m(p_{wf}) = aq + bq^2$$

A plot is made of

$$\frac{m(\bar{p}) - m(p_{wf})}{q} \;\text{versus}\; q$$
The b value is calculated from the line slope and the a value from the intercept.
The AOF is then:

$$AOF = \frac{-a + \sqrt{a^2 + 4b\left(m(\bar{p}) - m(p_{atm})\right)}}{2b}$$
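The same final step sketched numerically for the pseudo-pressure (a & b) form, with hypothetical coefficient values:

```python
import math

# Hypothetical LIT coefficients and pseudo-pressure drop down to atmospheric pressure
a = 5.0e3        # psi^2/cp per Mscf/D
b = 0.5          # psi^2/cp per (Mscf/D)^2
dm_p = 6.0e8     # m(p_bar) - m(p_atm), psi^2/cp

AOF = (-a + math.sqrt(a**2 + 4.0 * b * dm_p)) / (2.0 * b)
print(f"AOF = {AOF:.0f} Mscf/D")
```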
Oil case
The equation:
With:
$$b = \frac{1.4352\times 10^{-12}\,\beta\,\rho_l\,B^2}{h_w^2}\left(\frac{1}{r_w} - \frac{1}{r_e}\right)$$
µ = viscosity
Gas Case

$$m(\bar{p}) - m(p_{wf}) = aq + bq^2$$

But the parameters a and b are estimated from the following empirical equations:
$$a = \frac{1495.6\,T}{kh}\left[\log\frac{A}{r_w^2} - \log\frac{2.2458}{C_A} + 0.87\,s\right]$$

$$b = \frac{1299.15\,T\,D}{kh}$$

$$D = \frac{0.00003\,\gamma_g}{\mu\,h\,r_w\,k^{0.333}}$$
Where: S = skin, µ = viscosity.
Other similar methods exist for various well geometries; the difference lies in the empirical equations.
Classically, the static pressures (pi, p*, p bar, final build-up pressure) are corrected:
- from the gauge depth to the sandface using the static well gradient;
- from the sandface to a common reservoir datum using the reservoir gradient, taking into account any gradient changes if it is necessary to move through a fluid contact in the reservoir.
This correction is usually done manually and is essential to establish reservoir pressure trends, declines and depletion rates.
Little attention has been paid to the correction of flowing pressures and to the fact that the skin calculation returned by Pressure Transient Analysis is in fact at the gauge level; it therefore includes the pressure loss due to friction between the sandface and the gauge.
In that way the skin, and especially the rate-dependent skin, can be considerably overestimated. We are only interested in sandface values, but this problem has largely been ignored by the well test community until recently.
Most Pressure Transient and Production Analysis software packages include such corrections. This can be a correction of the model to simulate the pressure at a given depth, or an a priori correction of the pressure history from the gauge depth to sandface. Among other inputs, the intake calculation requires:
- A temperature profile;
- PVT definitions.
The below figure shows a ‘Vertical intake curve’: pressure vs. rate at a selected well depth using a fixed GOR and water cut.
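As a highly simplified sketch of what such an intake calculation involves (single-phase liquid only, constant density, Darcy-Weisbach friction with an assumed friction factor; a real intake curve relies on the multiphase flow correlations listed below, and all input values here are hypothetical), the pressure can be marched down from the wellhead:

```python
import numpy as np

def intake_pressure(p_wellhead_psi, q_stb_d, depth_ft, d_in=2.99,
                    rho_lb_ft3=53.0, B=1.2, f=0.02, n_seg=200):
    """Single-phase liquid intake pressure at depth: hydrostatic plus friction,
    marched down the tubing in small segments (all inputs are assumptions)."""
    d_ft = d_in / 12.0
    area = np.pi * d_ft**2 / 4.0                    # ft^2
    q_ft3_s = q_stb_d * B * 5.615 / 86400.0         # downhole volumetric rate, ft^3/s
    v = q_ft3_s / area                              # velocity, ft/s
    dz = depth_ft / n_seg
    p = p_wellhead_psi
    for _ in range(n_seg):
        dp_hydro = rho_lb_ft3 * dz / 144.0          # psi (lb/ft3 * ft / 144)
        dp_fric = f * (dz / d_ft) * rho_lb_ft3 * v**2 / (2.0 * 32.174) / 144.0  # psi
        p += dp_hydro + dp_fric                     # producer: pressure grows downward
    return p

print(f"{intake_pressure(300.0, 2000.0, 8000.0):.0f} psia at 8000 ft")
```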
Orkiszewski: Developed using work from both Duns & Ross and Hagedorn & Brown. It uses the
Griffith and Wallis method for bubble flow, a new method for slug flow, and Duns and Ross for
transition and mist flow.
Hagedorn-Brown: Developed experimentally using a 1500ft test well with 1″, 1.25″, and 1.5″
tubing. The correlation is used extensively throughout the industry and is recommended in
wells with minimal flow regime effects.
Beggs-Brill: This correlation was developed experimentally using 1″ and 1.5″ pipe, inclined at
several angles. Correlations were made to account for the inclined flow. The correlation is
recommended for deviated or horizontal wells.
Mukherjee-Brill: Developed experimentally using 1.5″ steel pipe inclined at several angles. This
includes downhill flow as a flow regime. This is recommended for inclined or horizontal wells.
Dukler-Eaton: Based on 2600 laboratory and field tests to generate an expression for frictional
pressure losses along pipelines. It can be used for horizontal flow.
Lift curves usually provide the pressure drop between the well head and lift curve depth.
However, it can happen that they correspond to a pressure drop between a given depth that is
different from the well head and the lift curve depth.
External lift curves are discrete data. When using them the pressure drop calculations are
performed by interpolations in the lift curve table in each of the required dimensions.
Therefore it is recommended to provide as many lift curves as possible to cover the widest
possible range of situations such as varying rates, phase ratios and well head pressures.
Multiphase cases are treated using multiphase flow correlations. In the event that the
interpreter has identified more than one phase rate, Perrine’s method is usually used and the
phase ratios are calculated at each step from the loaded measured multiphase rates and used
in the intake calculator for the pressure depth correction.
If a non-linear numerical model is used the numerical model output will provide sandface
phase rates. The phase ratios are calculated at each step from the simulated multiphase rates
and used in the intake calculator for the pressure depth correction.
Pressure drop correlations are valid under dynamic conditions only, not during build-ups or
fall-offs. To ensure continuity of the corrected pressure, the limit of the pressure drop when
the flowrate tends to zero is used during shut-ins. With the flow correlations, this amounts to
dividing the tubing into small segments where each contains phase rates corresponding to the
amount given by the flash PVT, and consequently the deduced holdups.
Correcting the model: In Saphir, when the intake pressure model has been defined, the
interpretation engineer will decide that when generating the model the model response will be
corrected to gauge depth.
The downhole rates are calculated by the model and will therefore incorporate wellbore storage
effects. This ensures that, with significant friction, there will be no discontinuity in the
corrected model when the surface rate changes.
13 – Permanent Gauges & Intelligent Fields
However the trend seems irreversible. Unit prices are falling, and reliability is increasing, and
the installation cost for a permanent gauge in a new development well is now marginal
compared to the overall cost of a brand new well.
When they are placed in the hole, PDGs have already been sold for the sake of reservoir
surveillance and the wish to know what is happening in real time. Any additional use of this
data, as presented in this chapter, can be considered as the icing on the cake, although there
are some additional steps to take in the acquisition, storage and retrieval of the data in order
to properly fit our needs.
The interest in PDG data goes beyond the simple knowledge of pressure and temperature at
any given time. The combination of the well production, when known, and this pressure data is
a good candidate for mainly two types of analyses and for real time rate allocation. Fig. 13.A.1
shows a typical set of permanent downhole pressure data. The period covered is approximately
three months. With an acquisition rate of one point every five seconds, this corresponds to
around 1,500,000 raw data points.
Each peak in the data is a shut-in. The following figure shows an expanded version of one of
these peaks. If we have a fair idea of the producing rates, and this may in itself be an issue,
each peak is a good candidate for Pressure Transient Analysis (PTA). It is potentially the
equivalent of a free, but not designed well test. In addition, comparing these successive build-
ups in time will provide information on how the well and the reservoir within the well drainage
area have evolved between these build-ups.
In order to get a reliable last flowing pressure and analyse the early time of each build-up we
will use the smallest time steps, i.e. the highest frequency we can get, on repeated but
relatively short time ranges.
On the other hand, in the following figure we have ignored the peaks and focussed on the
producing pressures, as a low frequency signal over the whole time range of the permanent
gauge survey. These pressures, combined with the production rates, may be used for
Production Analysis (PA) if, again, we have some idea of the well production.
In order to use these data, we need a low frequency extraction of the producing pressures
over a relatively large time range.
The usage of permanent gauge data presented in the previous section and developed later in
this chapter is possible only because we are getting data at a high acquisition rate AND over a
large time interval. If we multiply high frequency by large durations, crudely, we end up with a
huge number of data points.
For an acquisition rate of every 5 seconds over a period of 3 years, we get 20 million data
points per channel. Acquisition rates and durations vary, but 20 million is a reasonable
geometric average, with a range (in 2010) typically between tens of millions and a few
hundred million data points per gauge channel, sometimes referred to as tags in the storage
databases. And with the life expectancy of the permanent installations increasing all the time,
this number just grows to mind-boggling values.
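As a rough order-of-magnitude check (assuming a constant 5-second sampling rate over exactly three years):

$N = \frac{3 \times 365 \times 86{,}400\ \text{s}}{5\ \text{s}} \approx 1.9 \times 10^{7}$ points per channel,

which is indeed the "around 20 million" quoted above.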
Answering these questions (how the data are stored and how they can be accessed) is generally half of the work. Historically, storage and access options would be decided by the people who purchased the gauges and essentially 'just' saw them as a real time surveillance tool. Getting back to ALL the raw data from the beginning of
the acquisition was not really top of the agenda. But we will see in the following that this is
precisely what is needed; ALL data, since the beginning, with NO prior editing.
Low frequency data (PA): Considering that rates are typically given daily, 1 pressure point
per hour is more than enough. For a 10-year survey, we need less than 100,000 points.
High frequency data (PTA): Considering we will typically find 100 or fewer relevant build-ups, and considering that 1,000 points extracted on a logarithmic time scale will be suitable for analysis, we need another, albeit different, 100,000 points.
So even for the largest data sets 200,000 points are sufficient to satisfy our processing needs.
This is around 100 times less than the median size of a raw data set. Unlike the raw data count, this reduced number is well within the memory and processing capability of today's (2010) PCs.
The challenge is how to get to this reduced number of relevant points. The answer lies in
applying a ‘smart’ filter that picks and selectively reduces both low and high frequency data.
Then we need to be able to transfer the filtered data to the different analysis modules. The
work flow is shown in the following figure.
Curve 1 shows a set of raw data corresponding to 12,000 points over a period of 5 days. The
noise level is high, and we see that a change in the trend occurs in the middle of the display
window.
Curve 2 shows the wavelet filtered signal with a low threshold. With such a setting, both the significant break and a lot of the noise are considered significant.
Curve 3 shows, conversely, the result with a very high threshold. All features in the response,
including the break, are below threshold, and the wavelet acts as a standard low pass filter.
Curve 4 shows the result for an intermediate value of threshold that clears most of the noise
but keeps the break intact. We will use this setting.
Once the correct threshold is set, we still have the same number of points but it is now
possible to reduce them using simple post-filtration, which is typically a combination of time
intervals and maximum pressure change.
We use two basic tools: a normalized scaling function $\phi$, used to define a low pass filter, and a corresponding wavelet function $\psi$, used to define a high pass filter. These functions must respect the following conditions:
$\int \phi(x)\,dx = 1$ and $\int \psi(x)\,dx = 0$
A simple example for functions $\phi$ and $\psi$ is shown in the following figure. In reality the functions are a little more complex in order to avoid numerical effects; we will see this in a later section discussing wavelet processing.
These functions are used to decompose a given signal (our original data or some transformed
data) into two component signals: the complementary transform Ca and the wavelet transform
Wa, by respective convolution of the original data with the scaling and wavelet functions.
$C_a(t) = \frac{1}{a}\int f(x)\,\phi\!\left(\frac{x-t}{a}\right)dx$

$W_a(t) = \frac{1}{a}\int f(x)\,\psi\!\left(\frac{x-t}{a}\right)dx$
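To make the decomposition concrete, here is a minimal sketch in Python using the simplest possible pair of functions (a box average and its Haar-like difference counterpart). This is an illustration only, not the more elaborate functions actually used; the names and normalization below are assumptions.

import numpy as np

def decompose(signal):
    # One decomposition level: pairwise average (low pass, the complementary
    # transform Ca) and pairwise difference (high pass, the wavelet transform Wa).
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    return (even + odd) / 2.0, (even - odd) / 2.0

def recompose(c, w):
    # Exact inverse of decompose(): interleave (c + w) and (c - w).
    out = np.empty(c.size * 2)
    out[0::2] = c + w
    out[1::2] = c - w
    return out

# Tiny check of reversibility on a signal with a break in the middle
x = np.array([10.0, 10.2, 9.9, 15.0, 15.1, 14.8])
c, w = decompose(x)
assert np.allclose(recompose(c, w), x)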
If we consider our example functions, the complementary transform Ca(t) is the average of the
signal f on the segment [t-a/2,t+a/2]. This is our ‘low pass’ filter.
If the signal is constant or slowly changing over the interval [t-a/2,t+a/2], the wavelet
transform is zero, or very close to zero.
If the signal is random, or oscillating at a much higher frequency, the wavelet transform will
also be very close to zero.
It is only if substantial variations occur at the scale of a, for example if the signal has a shape similar to the wavelet function $\psi$ over this interval, that the wavelet transform will be significantly positive or negative.
If there is a break in the signal at time t, one can expect that the wavelet transform, whatever the considered frequency, is likely to be strongly positive (break downwards) or negative (break upwards).
One remarkable property of these transforms is that there is a numerical way to make them reversible. If we decompose a signal into a wavelet transform and a complementary transform, we will be able to re-create the original signal from these two transforms by a reverse operation. So this dual transform acts as a 'projection' of the signal onto two complementary spaces. This is only possible because the functions $\phi$ and $\psi$ have been carefully chosen: to each scaling function $\phi$ corresponds one wavelet function $\psi$, and vice versa.
Important: Original raw data is generally not evenly sampled. The required initial interpolation
may have a large impact on the process, as any information in the raw data lost in this initial
interpolation will be lost for good. This is why the frequency choice is important.
The transforms given in the equations above are replaced by numerical algorithms on discrete,
regularly sampled data sets. They will not be detailed here.
The function Wa(t) depends on the level of noise of frequency 1/a around time t. If the noise is
high, or if there is a break in the data at time t, the value of Wa(t) will be strongly negative or
positive. We select a threshold value THR, which defines the value of Wa above which we consider that the signal should be kept. We then define a modified wavelet function:
$|W_a(t)| \geq THR \;\Rightarrow\; W_a'(t) = W_a(t)$

$|W_a(t)| < THR \;\Rightarrow\; W_a'(t) = 0$
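A one-line sketch of this thresholding step, reusing numpy (np) from the sketch above and assuming the comparison is made on the absolute value of the coefficient:

def threshold(w, thr):
    # Keep wavelet coefficients whose magnitude reaches the threshold, zero the rest.
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) >= thr, w, 0.0)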
Instead of recombining Ca(t) with the original wavelet transform Wa(t) and arriving back at the
original signal, we recombine Ca(t) with the modified wavelet transform Wa’(t).
When the noise level corresponding to the frequency 1/a is small, i.e. when the wavelet
transform is below threshold, the function Wa’(t) is set to zero, and after recomposition the
data will have been smoothed out. When the wavelet transform is above threshold, the
function Wa’(t) is not truncated, and after recomposition the noise/break is kept.
If we had N evenly sampled data points on the original signal f, after decomposition we now
have N/2 evenly sampled data points in each of the transforms. The total number of points
remains N, but as we have two signals the time interval is now 2a.
The process must first interpolate (I) the raw data to start with a set of points Ca with a
uniform time spacing a. The choice of a is discussed in a later section.
The filter will not interpolate and process ALL raw data in a single run. This would be far
beyond the computing power of today’s (2010) computers. The idea is to take windows of
typically 100,000 to 1 million data points, and work successively on overlapping windows in
order to avoid end effects.
The signal Ca is decomposed (D) into a complementary transform C2a and a wavelet transform
W2a. The total number of points remains the same, half with C2a and half with W2a. The time
spacing for each of these series is 2a, and the frequency is now half the original. The signal C2a
is in turn decomposed into C4a and W4a, C4a is decomposed into C8a and W8a, until the desired
number of decomposition levels is reached. Here we will stop at C8a.
At this stage, the original interpolated signal Ca was decomposed into four series: C8a, W8a, W4a
and W2a, representing the same total number of points as Ca. If we were recombining these
components in the reverse process, we would return to our original signal Ca.
The data is de-noised by applying the Threshold (T) to the different wavelet transforms. The
new signal Ca’ will be created by successively recombining (R) C8a with the modified wavelet
transforms W8a’, then recombining the resulting C4a’ with W4a’, and finally recombining the
resulting C2a’ with W2a’.
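Reusing the decompose / threshold / recompose sketches above, the Decomposition, Threshold, Recombination (D, T, R) cascade could be illustrated as follows; the three levels and the per-level threshold values are assumptions, and the signal length is assumed divisible by 8:

def denoise(ca, thresholds):
    # D: successive decompositions Ca -> (C2a, W2a) -> (C4a, W4a) -> (C8a, W8a)
    coeffs = []
    c = np.asarray(ca, dtype=float)
    for thr in thresholds:
        c, w = decompose(c)
        coeffs.append(threshold(w, thr))   # T: de-noise each wavelet level
    # R: recombine from the coarsest level back to the original sampling
    for w in reversed(coeffs):
        c = recompose(c, w)
    return c

ca_denoised = denoise(np.tile(x, 4), thresholds=[0.05, 0.05, 0.05])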
We now end up with the same number of points, but this time large sections of the data, the producing part, will be smoothed, and this will allow data elimination with simple post-filtration (F).
In order to avoid the filtration destroying essential information, the process will typically store the break points, i.e. the points at which the wavelet functions exceeded the threshold, and instruct the filtration process to keep them no matter what.
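A hedged sketch of such a post-filtration pass; the parameter values and the exact keep/drop rule below are assumptions about typical settings, not the actual Diamant implementation:

def post_filter(t, p, break_points, max_dt=3600.0, max_dp=0.5):
    # Keep a point if it is a stored break point, or if enough time or enough
    # pressure change has accumulated since the last kept point.
    keep = [0]
    for i in range(1, len(t)):
        if (i in break_points or
                t[i] - t[keep[-1]] >= max_dt or
                abs(p[i] - p[keep[-1]]) >= max_dp):
            keep.append(i)
    return keep   # indices of the points retained after filtration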
On a typical set of permanent gauge data, the ratio between the number of raw data points
and the number of filtered points will range between 100 and 1000.
The parameters that define the wavelet process may be automatically set or controlled by the
engineer performing the filtration. In the following list we specify the parameters that will
generally be automatic or controlled, and the following sections will describe the influence of
the controlled parameters.
Selecting the smallest time interval between consecutive raw data points is not a solution, as acquisition times are not regular and it would not guarantee that the raw data points are captured. The highest possible frequency, i.e. the smallest possible a (e.g. the resolution of the time information), does not work either, as the amount of interpolated data would be unmanageable.
Filtration with an initial interpolation sampling of one tenth of a second would guarantee that
we will not miss anything, but it would involve one or several billion points.
Furthermore, starting with a very high frequency has a major drawback. For each additional
level of decomposition there is only a doubling of the time stepping. As the number of these
decomposition layers is limited, we might miss the frequency of the real noise.
The solution is to select an initial time stepping that fits the engineer’s needs. A time stepping
of 1 second will work but will involve CPU demands that may not be worth it, and the de-
noising of the production data may be insufficient. If the interest is not at all in high frequency
data, a time step of one minute will be more than enough and very fast. When one wants to capture the high frequency at a reasonable expense, a time step of 10 to 20 seconds will be a good compromise. The exact point of shut-in may not be spotted, but the data will still be usable for pressure transient analysis, and it will be possible to return to the raw data and selectively reload specific sections of interest.
The first thing that one has to do when ‘connecting’ to a PDG data tag is to have an idea of
what the data looks like globally. Having a quick preview of the data seems to be an obvious
task, but this is not the case when one has to connect to 300,000,000 data points on a slow
server. To do this, the process will request, when possible, the total number of data points and then request one data point in every n (typically 1,000 to 10,000) in order to get a fast preview of a few thousand points. Such a preview is shown below.
This may be the time for a surprise. In many cases the data were stored and never looked at
again. Here are some of the typical, irreversible problems that may be encountered at this
stage:
No data! Data was stored over time, never checked and we only have zeros.
Buffered data! Only the last, say, three months are present. The data historian was
programmed to only buffer the data for so long. Data older than the buffer were
irreversibly erased or massively decimated.
Bad data! Data are very noisy and scattered and hardly usable.
Slow access! Getting this initial preview can take forever. One may analyse the process used to get the data or improve the bandwidth between the machine and the data historian.
There may also be too many simultaneous accesses to the same historian. This problem will
generally be addressed by mirroring the data from the historian to the engineer’s machine
(Diamant local) or the dedicated server (Diamant Master).
Apparently buffered data! On the face of it this is the same symptom as the buffered data,
except that fortunately, in this case, the data is stored. The problem is that the setting of
the data historian is such that older data is not accessible to the client unless the historian
setting is changed.
Partly irrelevant data! For whatever reason there were some times when the gauge failed,
or was turned off, or for whatever reason gave irrelevant readings. The engineer may then
wish to graphically select a data range out of which data will just be ignored.
After the preview, and unless an irreversible problem was detected, the engineer may proceed
to set the wavelet filter and post-filtration. At load time the wavelet will be applied on
consecutive packs of data of a given buffer size. The first pack of points, within the load range,
is loaded and displayed. The typical size of a buffer will be 100,000 points but this can be
changed by the engineer. 100,000 points corresponds to six days of data stored once every
five seconds.
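As a quick check of that statement: $100{,}000 \times 5\ \text{s} = 500{,}000\ \text{s} \approx 5.8$ days, i.e. roughly six days of data per buffer.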
Fig. 13.E.1 – Data preview
Fig. 13.E.2 – Selecting the first buffer of points
The engineer will then specify the wavelet and the post filtration. At this stage it is the engineer's decision to set the different parameters and to zoom in on some parts of the data (shut-ins, noisy sequences, etc) in order to ensure that the data reduction does not affect the information we want to keep in the de-noised signal. An example of wavelet setting is shown below. The process will also show the difference between the original data and the de-noised signal.
Fig. 13.E.3 – Setting the wavelet filtration
Fig. 13.E.4 – Global view of the load
Once the engineer is satisfied with the filter the real load can start. The process takes raw data in packs corresponding to the size of the buffer. In order to avoid tail end effects, the different packs will overlap and the first / last de-noised points will be systematically ignored.
process will typically show the global load, i.e. where the current buffer stands in the overall
data history and a local view of the buffered data with both raw and de-noised data. At this
stage the engineer may stay and watch. If the data is locally mirrored, the process takes
around one second of CPU per buffer of 100,000 points.
During the load the engineer may realize that the filter setting is incorrect. The load can be interrupted, the settings modified and tried again. The load can then be restarted from scratch, or the new setting may take over from the old one at a time selected by the engineer.
Once the load is over, i.e. when all data currently stored in the data historian have been
treated, the process stops and statistics are displayed.
At this stage, the application has maintained a persistent link with the data historian and will know how to get the new incoming data. At a frequency set by the engineer, typically once a week, or on the engineer's direct request, the same process will re-establish the connection to the data historian, get the new raw data and apply the wavelet filter to this new data to increment the filtered data set. At this stage the engineer probably has better things to be getting on with, as the refreshing and updating of data becomes an automatic process.
Fig. 13.E.5 – Local view of the load
Fig. 13.E.6 – Load finished – Display and stats
Most aspects of Pressure Transient Analysis developed in this document are relevant to the interpretation of build-ups recorded by PDGs, i.e. there is nothing specific in the methodology itself. However, the process will have to be adapted to the origin of the data, its volume and its signature. The main differences with a standard well test are the following:
In most cases, the rate information is the daily reallocated production. Rates are generally not synchronized with pressures. Though it will not be detailed here, with rare exceptions the creation of the rate history will be the main source of trouble once the link with the data historian and the filtration are established. Furthermore, it is now possible to perform the synchronization automatically in Diamant, should this be an issue.
There are not one or two shut-ins, but typically 20, 50 or more. Because no engineer will
have a spare three months to interpret this data, productivity tools have been developed to
process all the buildups in a single work flow.
Many of these shut-ins are ‘free’, however they are rarely designed on purpose: they can
come from unscheduled automatic safety shut-downs of the wells, well operations or well
maintenance. Some of the shut-ins will be too short to carry any relevant information, or
they will follow a production phase that may not be stabilized and for which we may not
have any rate value. As a consequence, only part of the recorded build-ups will be valid
and useable.
The shut-ins are spread over the life of the well, and one cannot assume that the well
model, or at least the skin factor, will be constant throughout PDG recordings. However,
the succession of buildups will be an opportunity to study the evolution with time of the
well productivity, in particular the skin.
The loaded data is sent from the data module (Diamant Master) to the PTA module (Saphir) by
drag-and-drop or transferred seamlessly by a click of the mouse. All build-ups are extracted
and shown on the loglog plot. Typically, an option will allow the engineer to select only build-
ups of a certain minimum duration. The resulting history and loglog plots are illustrated below.
The main problem when obtaining 20 to 100 build-ups is that we certainly do not want to extract and interpret all of them, one after the other. In order to face this problem (too much data!) we need to use productivity tools that will run statistics and qualify the build-ups on which we are ready to spend some time.
One 'low tech' way to do this is to return to IARF (yes, the old stuff). An example of this method is, on the loglog plot, to select the time of stabilization of the majority of the derivatives and draw a series of semilog straight lines from it. An option could be to set a vertical range of acceptable permeabilities, only keep the build-ups whose calculated k falls within this range, then run a global regression where the unknowns are one single value of k and one skin value per build-up. The resulting straight lines and the evolution of skin in time are shown below.
Fig. 13.F.3 – Multiple line analysis with the same k (left) and plot of skin vs. time (right)
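A hedged sketch of such a screening and global single-k fit, using the classical radial-flow semilog slope relation in field units; this is not KAPPA's implementation, and the assumption that q, B, mu and h are the same for every build-up is a simplification:

import numpy as np

def screen_and_fit(buildups, q, B, mu, h, k_range=(1.0, 1000.0)):
    # buildups: list of (dt_hours, p_psia) arrays restricted to the IARF window.
    slopes = np.array([np.polyfit(np.log10(dt), p, 1)[0] for dt, p in buildups])
    k_each = 162.6 * q * B * mu / (np.abs(slopes) * h)     # classical semilog slope
    keep = [i for i, k in enumerate(k_each) if k_range[0] <= k <= k_range[1]]
    k_global = float(np.mean(k_each[keep]))                # single permeability
    m_global = 162.6 * q * B * mu / (k_global * h)
    # With the slope now fixed, each retained build-up only needs its own
    # intercept, from which one skin value per build-up can then be derived.
    intercepts = [float(np.mean(p) - m_global * np.mean(np.log10(dt)))
                  for i, (dt, p) in enumerate(buildups) if i in keep]
    return k_global, keep, intercepts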
Another way to use this information is to select consecutive quality build-ups that tell, more or
less, the same story. We can then use recent work on deconvolution to end up with a response
that will be longer and may provide more information on the reservoir boundaries and/or the
drainage area of the well. This tool must however be used with care. The extracted build-ups
and deconvolved responses are shown below.
Once the engineer has focussed on a few build-ups of interest, it may be wise to return to the raw data and reload some of these. After all, what we have loaded so far is the result of a careful but not foolproof filtration. Because the data module (Diamant Master) maintains the link with the data historian, it may be possible to re-visit this raw data and, for the time range corresponding to the build-ups of interest, reload the raw data with a different filter or none at all. The originally filtered data will be replaced by the reloaded data on this time range only. New data will still be loaded with the default wavelet setting.
The original filtered data and the reloaded data are compared in the figures below. In most
cases, the wavelet filtration will not have affected the response and could have been used for
analysis, but the advantage of the reload is that we are now absolutely sure the signature was
maintained.
Traditional analysis techniques, using type-curves (e.g. Fetkovich) or straight lines (e.g. Arps)
are based on the assumption of a constant pressure production. This is the ‘old stuff’ described
in the chapter Production Analysis. Just looking at the above figure it is clear that the constant
pressure assumption is rarely valid; for any practical purpose the traditional (old) methods are
not very useful. There is no way to select a reliable curve match in the Fetkovich type curve
plot or draw a reliable decline in the Arps plot (there is no decline).
The next step in Production Analysis is to use the modern tools that also take into account the pressure history. The loglog plot, showing rate-normalized pressure, and the Blasingame plot, which plots the productivity index, can be highly scattered, as they plot against material balance time (see illustrations below). The models, which are plotted as continuous lines, will jump back and forth when the rate changes. The diagnostic capability is limited as we are essentially matching a cloud of measured points against a cloud of simulated points.
Therefore, in the majority of cases the only diagnostic tool will be the history plot, and the goal will be to globally history match the data (see below). Forecasting of the well performance from an anticipated producing pressure is also illustrated below. To improve confidence in the process it is always recommended to use the models found in the PTA of the build-ups to constrain the PA model, possibly integrating the evolution of skin with time.
When this integrated environment was released and tested on a few pilot sites, there was one piece of good news and two pieces of bad news. The good news was that this process was exactly what operators needed to face the challenge brought by PDG data. However we quickly faced two major issues: the poor access to the data historian and the lack of sharing of this process within a workgroup.
The wavelet processing and filtering can handle 100,000 data points per second. The ability of a data historian to deliver the points will typically be 100 to 1,000 times less.
One of the first adaptations of our workflow was the mirroring of the raw data. This is sequential and is done once and for all, with only incremental loads on a daily or weekly basis. The data is mirrored using a storage option designed for fast and random access to the raw data: a direct binary access format, sped up by an index that gives immediate access to the first data point of every calendar day.
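The idea of such a daily index could be sketched as follows; this is purely illustrative (a flat file of fixed-size records with an assumed layout), not the actual BLI format:

import struct

RECORD = struct.Struct("<dd")              # assumed record: (timestamp in s, value)

def build_daily_index(path):
    # Scan the binary mirror file once and note the byte offset of the first
    # record of each calendar day, so any day can later be read directly.
    index = {}
    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(RECORD.size)
            if len(chunk) < RECORD.size:
                break
            ts, _ = RECORD.unpack(chunk)
            index.setdefault(int(ts // 86400), offset)
            offset += RECORD.size
    return index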
The workflow is described in the following figure. There are distinct processes:
Data mirroring: The engineer connects to the right tag and the mirroring starts as a background process. Because there may be several gauges involved, the mirroring will cycle over the different gauges until the end of each tag is reached. When the end is reached for a given tag, the mirroring process will re-connect and update the mirror on a schedule controlled by the engineer.
Data filtering: When there is enough data in the mirror to start working on it, typically 100,000 points, the engineer may define the wavelet algorithm and post-filtering according to the process described in the section 'Wavelet filtration - practice'. The filtering is executed until the end of the mirror is reached. Once this is done the process will return to the mirror on a schedule controlled by the engineer (typically once 100,000 more points are available) and filter the next pack of data.
PTA and PA: By drag and drop the filtered data is sent from Diamant to the Saphir and
Topaze modules for analysis.
Partial reload: When a build-up is selected, the engineer can, from Diamant, return to the mirrored data and replace a range of filtered points by the original points, or by data filtered with a more refined setting.
Storage: While mirrored data are stored in the dedicated BLI files, all the rest, including the
serialization, is stored in a dedicated Diamant file.
The workflow of the workstation solution turned out to be satisfactory in the case of relatively
small structures where only a few engineers are involved in this process. However, in the pilot
tests performed by large operators very serious issues pointed to the limitations of a
workstation solution:
The resulting filtered data were stored in a KAPPA proprietary format that required a KAPPA application to read. There was no export to third party databases.
Each engineer would connect to the data historian individually, considerably increasing its load.
The solution runs on a dedicated server that is continuously mirroring and filtering the data.
Export to a third party database allows filtered data to be permanently shared and accessible
to third party applications. Filtered data and other objects are stored in a local database and
are accessible and may be shared by all engineers. Mirroring is done only once for each gauge.
A given gauge may be subject to several filtering options, but the result is also shared.
Some engineers, given the privilege, may be entitled to mirror tags and create new filters. Others will just take the result of the filtration and drag and drop it into their analysis applications.
14 – Production logging
OA - OH
14.A Introduction
This chapter is not an exhaustive manual on the interpretation of Production Logs (PL). It does not go into as much detail as is given for PTA and PA. However it summarizes reasonably well the history and current status of the methodology, and how PL Interpretation integrates into the analysis of Dynamic Data.
The first production logs were temperature surveys run in the 1930’s. The first spinner based
flowmeters appeared in the 1940’s. They were complemented by fluid density and capacitance
tools in the 1950’s. Together with some PVT correlations and Flow Models, these were the
elements required to perform what we will call today a ‘classical’ multiphase interpretation.
The first probe tools were introduced in the 1980's. Although they acquired local measurements at discrete points across the borehole cross section, the initial objective was to reduce these local measurements to a single pipe-averaged value. In the late 1990's, these probes were packaged into complex tools that began to measure the velocity and holdup distributions across the pipe in order to address the challenges of understanding the flow in complex, horizontal and near-horizontal wells.
PL interpretation was overlooked by the Industry for too long. It is not part of the standard curriculum in Petroleum Engineering departments. To our knowledge only Imperial College in the UK has an education module dedicated to PL. With a few notable exceptions most oil companies used to consider that PL was a simple process of measuring downhole rates. Specialized interpretation software was developed and used by Service Companies to process the data, either on-site or in a computing center. This situation changed in the mid-90's, when commercial PL software became more accessible within the industry, and oil companies began to realize that PL Interpretation was, indeed, an interpretation process and not just data processing.
PL is an in-well logging operation designed to describe the nature and the behavior of fluids in
or around the borehole, during either production or injection. We want to know, at a given
time, phase by phase and zone by zone, how much fluid is coming out of or going into the
formation. To do this the service company engineer runs a string of dedicated tools:
PL may be run for different purposes: monitoring and controlling the reservoir, analyzing
dynamic well performance, assessing the productivity or injectivity of individual zones,
diagnosing well problems and monitoring the results of a well operation (stimulation,
completion, etc). In some companies the definition of PL extends up to what we call Cased
Hole Logging, including other logs such as Cement Bond Logs (CBL), Pulse Neutron Capture
Logs (PNL), Carbon/Oxygen Logs (C/O), Corrosion Logs, Radioactive Tracer Logs and Noise
Logs. In this chapter we will focus on Production Logging per se, and explain the main methods
to interpret classical and multiple probe Production Logs.
It is the authors' recommendation that PL be run at an early stage of the life of the well, in order to establish a baseline that can be used later when things go wrong. This is generally done when PL is identified as a necessary reservoir engineering tool (for example in extremely layered formations); otherwise it will be a hard sell to Management.
Too often Production Logs are run when ‘something’ has gone wrong…
At the end of a successful interpretation, the PL engineer will have a good understanding of the
phase-by-phase zonal contributions. When multiple probe tools (MPT) are used this will be
complemented by very good looking tracks describing the areal flow.
Fig. 14.B.1 – MPT interpretations in Emeraude (FSI top and MAPS bottom)
PL allows the qualification and/or the quantification of a certain number of operational issues: formation cross-flow, channeling of undesired phases through poor cement, gas and water coning, casing leaks, corrosion, non-flowing perforations, etc.
Fig. 14.B.2 – Examples of usage: Crossflow, Channeling, Gas Coning and Water Coning
Production Logging can also be used to identify fracture production and early gas or water
breakthrough via high permeability layers. During shut-ins the movement of a water or oil
column can also be identified.
PL tool strings may be run with surface read-out, using a mono-conductor cable, or on
slickline. Surface read-out allows a real time quality control of the data at surface. The logging
program can then be adjusted depending on the results. The slickline option is less expensive
and easier to run. However it is a blind process, as data are stored in the tool memory. There
is no way to check the data before the string is pulled out. Power requirements also limit the range of tools that can be run on slickline.
A typical, classical tool string is shown in the figure below, right. Sensors are not located at the same measure point along the string. As measurements are presented vs. depth, they are not synchronized for a given depth, and this may become a serious issue when flow conditions are not stabilized. In some cases one may have to stop the tool and check for transients, in order to discriminate between noisy tools and real effects. Conversely, when running stationary surveys, the tools do not record at the same depth. This issue of time vs. depth is the reason why compact tools are recommended, to minimize errors due to this limitation.
Spinners are of various types, materials and shapes, depending on usage. They turn with as little friction as possible, and include internally located magnets which activate Hall effect switches, generating a pulse several times per revolution. If the magnets are somewhat asymmetric, they provide a way for the tool to detect the direction of rotation.
Fig. 14.C.2 – Various types of spinners – Schematic of the Hall effect – Courtesy Sondex
The spinners are packaged in several types of tools. There are three main types of flowmeters:
Inline, Fullbore and Petal Basket. Other types are not described here.
Fig. 14.C.3 – Inline, Fullbore and Petal Basket flowmeters – Courtesy Sondex
Inline flowmeters have small diameters and can be used to log in completions with restricted diameters (tubing, scaled-up wells, etc). Conversely they have a low sensitivity and should be selected to log high rate / high velocity wells. Because of the small spinner size, good centralization of the tool is required.
Fullbore flowmeters have larger blades that are exposed to a larger part of the flow cross-
section. The blades collapse in order to pass the tubing and other restrictions. They expand
and start turning when the cross-section becomes large enough. Fullbore flowmeters have a
good sensitivity and can be run for a wide range of flow rates and velocities. There may sometimes be issues in injectors, where the blades may collapse when the flow coming from above becomes too large. A lot of tools (see below) combine the fullbore spinner with an X-Y caliper that protects the blades and expands / collapses the tool. Such a set-up, combining two tools in one, creates a more compact tool string.
Fig. 14.C.4 – PFCS Fullbore flowmeter with built-in X-Y caliper – Courtesy Schlumberger
Petal Basket flowmeters concentrate the flow towards a relatively small spinner. They are very efficient at low flowrates. However they are not rugged enough to withstand logging passes and are really designed for stationary measurements; in addition, the tool shape often affects the flow regime.
It is important to realize that spinner based flowmeters do not measure rates. They do not
even calculate the fluid velocity. The output of a spinner based flowmeter is a spinner rotation
in RPS (or CPS for some tools). The process of converting RPS to apparent velocity, then
average velocity, and then ultimately rates, is the essence of Production Logging Interpretation
and requires additional measurements and assumptions. This is described later in the Chapter.
To schematize we will need at least one more tool to get two-phase interpretations, and at
least a third one to get three-phase interpretations. Without the minimum number of tools,
additional assumptions will be needed.
If there are more tools than necessary, one will not be able to match all measurements exactly
at the same time, because of the nature of the calculations done (more on this later).
The first natural complement of the spinner type flowmeters are density tools. In a two-phase
environment, measuring the fluid density will allow discriminating the light phase and the
heavy phase, provided that we have a good knowledge of the PVT. There are four main tools
that may give a fluid density: gradiomanometers, nuclear density tools, tuning fork density
tools (TFD) and… pressure gauges after differentiation.
14.C.2.a Gradiomanometers
The tools measure the difference of pressure between either side of a sensing chip. The direct measurement (P2−P1) must be corrected for the hydrostatic pressure of the internal column of silicone oil in order to get the effective pressure (PB−PA). This pressure difference then has to be corrected for deviation and friction effects in order to get a corrected density.
The acceleration term is generally ignored. For a given surface the friction gradient is a function of the friction factor (f, calculated from the Reynolds number and the surface roughness), the density, the relative velocity, the friction surface and the flow area:
$\left.\frac{dP}{dZ}\right|_{friction} = \frac{f\,\rho\,V^{2}\,S}{8\,A}$
We generally split the friction into tool friction and pipe friction:
$\left.\frac{dP}{dZ}\right|_{friction} = \left.\frac{dP}{dZ}\right|_{pipe} + \left.\frac{dP}{dZ}\right|_{tool} = \frac{f_{p}\,\rho\,V^{2}\,D}{2\,(D^{2}-d^{2})} + \frac{f_{t}\,\rho\,V_{t}^{2}\,d}{2\,(D^{2}-d^{2})}$
The issue is whether the fluid present in the chamber is representative of the flow through the pipe. The tool very quickly shows its limitations in deviated wells with segregated flow. The existence of a radioactive source is also an issue.
This is a fairly recent type of tool and we will have to wait a little longer to assess its efficiency.
Capacitance and holdup tools are designed to provide the holdup of a particular phase. This series of tools is a complement to the spinners, used to differentiate the phases in multiphase flow.
Pressure is required for PVT calculations; it can be used as an indication of the production stability; it can supplement a missing / faulty density measurement when differentiated; and it provides one of the key pieces of information for SIP.
Pressure gauges can be split into strain gauges and quartz gauges. With strain gauges, the mechanical distortion caused by the applied pressure is the primary measuring principle. There are several sensor types based on Bourdon tubes, thin-film resistors, sapphire crystals, etc.
In quartz gauges, a quartz sensor oscillates at its resonant frequency. This frequency is directly affected by the applied pressure.
Like the Pressure, the Temperature is used in PVT calculations. It can also reveal flow outside the wellbore, due to cement channeling or a leak for instance. The Temperature can be used quantitatively, provided an adequate forward model for the calculations is available.
Fig. 14.C.12 – Independent X-Y caliper; X-Y caliper combined with fullbore spinner
Courtesy Sondex and Schlumberger
Part of the job planning is to account for the time it will take for the well to be stable. The notion of stability is defined as a maximum pressure variation over time, since we know from well testing that the flowing pressure will usually not be strictly constant. Time-lapse passes are interesting to record the evolution with time of, for instance, the warmback following an injection period.
In order to get Selective Inflow Performance (SIP) it is necessary to flow the well at more than one rate. Typically, 3 rates and a shut-in are recorded, as illustrated below.
In a multiphase situation, there are n-1 additional unknowns, n being the number of phases. So in two-phase flow a density or holdup measurement is required, and in three-phase flow two such independent measurements are required.
For all tools but the spinner, one pass would be sufficient for calculation. However, comparing
several passes for other tools is a way of judging the well stability. Having multiple passes also
provides more chances to have a representative measurement if the data is bad on some
section of some passes.
The spinner calibration, which is explained next, requires several passes at various logging speeds. A typical job will comprise 3-4 down and 3-4 up passes, as illustrated below. Passes are normally numbered by increasing speed, and the slow passes are recorded first. This means that Down 1 is normally the first and slowest pass in the well.
Stations may be recorded for certain tools that require it. In addition the ability to display the
measurement of a station versus time is a further indication of well stability or instability.
Data editing - general: telemetry errors can introduce spikes in the data which in turn will upset the scaling and make the QAQC difficult. De-spiking should be performed beforehand to avoid this problem, using for instance a median filter. Some tool responses, for instance nuclear tool responses, may also be noisy and will usually warrant processing with a low-pass filter.
Data editing – spinner / cable speed: when starting fast passes 'yo-yo' effects may be seen. Similarly, oscillations may appear on the flowmeter as a result of the cable sticking/slipping along the length of the completion. These can be edited out with a sliding-window average.
Fig. 14.F.2 – Stick & slip (left) and yo-yo at start of passes (right)
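A minimal sketch of these two editing steps (median de-spiking and sliding-window averaging); the kernel and window lengths are arbitrary illustrative values:

import numpy as np
from scipy.signal import medfilt

def despike(curve, kernel=5):
    # Median filter: removes isolated telemetry spikes; kernel must be odd.
    return medfilt(np.asarray(curve, dtype=float), kernel_size=kernel)

def smooth(curve, window=11):
    # Sliding-window average: damps yo-yo and stick/slip oscillations.
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(curve, dtype=float), kernel, mode="same")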
Unsigned spinner readings will need to have their sign corrected before processing, on Up passes in producers and Down passes in injectors.
Depth matching: using the Gamma Ray or CCL, all the available data should be set to a coherent depth. Sometimes the depth correction might require more than a simple shift.
Repeatability: the repeatability of readings from one pass to another will be an indication of the well stability. Some tool readings (density and temperature for instance) are affected by the cable speed when the fluid mixture changes drastically, and this should be noted as it will guide the choice of reference channels for the calculations.
Consistency: a first consistency check between sensors can be done at this stage. Beware
that some tools will need corrections before they can provide a quantitative answer (e.g. a
pseudo density – dP/dZ - in a deviated well). Some tools react to what happens behind the
tubing or casing (e.g. the Temperature) and will behave differently from the spinner.
Qualitative analysis: once all data have been cleaned, most of the questions can be answered qualitatively, but there are some pitfalls, like mistaking a change in spinner response for an inflow/outflow when it is in fact due to a change of ID.
Reference channels: for any measurement that will be used quantitatively in the interpretation, a single curve must be selected or built. Most of the time this curve will be the slowest down pass, but some editing or averaging may be necessary. This procedure does not apply to the spinner, which needs to be calibrated first, in order to calculate velocities, before we can define or build the reference velocity channel.
The response of an ideal spinner run in a static fluid would be as plotted below, with 2 distinct
response lines for Up passes (negative CS) and Down passes (positive CS).
(1) The rps value is a linear function of the velocity; the response slope depends on the
spinner pitch; i.e. its geometry only.
(2) The spinner rotates with the slightest movement of the tool, i.e. the slightest
movement of fluid relative to the tool.
(3) The negative slope is lower, typically as the tool body acts as a shield; this is not the
case with a symmetrical tool like an in-line spinner.
In reality, the response is affected by the fluid properties as well as the bearing friction. The
equation below is a possible representation (SPE Monograph Vol. 14; Hill A.D.):
For PL interpretation we will consider that the calibration is still a straight line. Since this line is
approximating a non-linear function, it may vary with the encountered fluid. In addition, the
tool response is shifted by a threshold velocity, the minimum velocity required for the spinner
to rotate. This threshold velocity will depend on the fluid; typical numbers for a fullbore spinner are 3 to 6 ft/min in oil, and 10 to 20 ft/min in gas.
The plot above represents the spinner response in a no-flow zone as a function of the cable
speed. If the fluid is moving at some velocity Vfluid, the tool response will be the same, but
shifted to the left by Vfluid as shown below. The reason behind the shift is that since the
spinner reacts to the sum of (Vfluid + Cable Speed), the rps value for a CS value ‘X’ in Vfluid,
is the response to a CS value of (X+Vfluid) in the no-flow zone.
Fig. 14.G.4 – Fluid velocity (left) and spinner response for the 6 passes (right)
Three stable intervals are considered, represented by the sections in blue, green, and red. The corresponding points are plotted on an rps vs. cable speed plot and the lines are fitted by linear regression:
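A hedged sketch of such a calibration-zone fit; the apparent-velocity convention below (reading the shift of the x-intercept relative to the spinner threshold) is one possible choice and depends on pass direction and tool sign conventions, not necessarily Emeraude's exact treatment:

import numpy as np

def calibrate_zone(cable_speed, rps):
    # Fit rps = slope * CS + intercept over one stable calibration zone.
    slope, intercept = np.polyfit(cable_speed, rps, 1)
    return slope, intercept

def apparent_velocity(slope, intercept, positive_threshold=0.0):
    # The fitted line is the no-flow response shifted left by the fluid velocity,
    # so the velocity can be read from the x-intercept, corrected for the
    # spinner threshold.
    x_intercept = -intercept / slope
    return positive_threshold - x_intercept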
Historically, when doing hand calculations, the usual method was to consider a spinner calibration zone in the stable regions between perforated intervals, as above. The velocity was then calculated directly from the cross-plot for every zone. Today's idea is that you only put in spinner calibration zones because you think there has been a change of slope or threshold (usually due to a change of fluid type). In theory a single-phase well only needs one spinner calibration zone. In reality, having multiple zones (as long as they are stable) ensures that any change is properly captured.
The apparent velocity for a point on a positive line is calculated based on the slope of that line and the common positive threshold. The apparent velocity for a point on a negative line is calculated based on the slope of that line and the common negative threshold. This mode is suitable in the case of a single-phase fluid.
This ratio is equal by default to 7/12 = 0.583 but can be set from the values of a no-flow zone. Obviously this can only be used on zones with both positive and negative intercepts.
Independent Thresholds
This mode allows different thresholds for each calibration zone and is the most general one.
Note that the only problem with this mode is that on a zone where there is fluid movement, at
best, we can get the sum of the thresholds. Deciding the positive and the negative can be
done bluntly (i.e. halving the sum) or based on the split on the no-flow zone.
The slope and threshold values for a calibration zone will apply to all calculations performed
within that zone. Data above the top calibration zone will use the top calibration, and data
below the bottom calibration zone will use the bottom calibration. In between consecutive calibration zones the possible options are to move linearly from the values of one zone to the values of the other, or to make this change occur sharply. From the previous discussion on spinner response we know that the difference in spinner response is caused by the change in fluid. Because the change in fluid property is local (to a particular inflow zone) rather than spread over a large interval, a sharp transition is probably more appropriate. Emeraude offers all the possibilities: progressive change, sharp change, or a mix of both.
Having decided how to obtain a slope and threshold at any depth, the procedure to get the apparent velocity curve for a given pass is as follows: for a given measuring point, we have the cable speed and the spinner rotation. Placing the point on the calibration plot, we follow the slope, get the intercept and correct for the threshold.
Producing one apparent velocity channel per pass allows an overlay and is a further check of
the well stability and the calibration adequacy. When this check has been made, the multiple
apparent velocity curves are typically replaced with a single average value (median stack or
lateral average). This single apparent velocity curve is the sole information required for
quantitative analysis.
The spinner calibration allows us to get the apparent velocity, VAPP, everywhere there is a measurement.
To get a single-phase rate we need the total average flow velocity, which can be expressed from VAPP with a correction factor usually noted VPCF: $V_M = VPCF \times V_{APP}$.
Under a value of 2000 for the Reynolds number the flow is laminar with a parabolic velocity profile. In this situation, the maximum velocity is twice the average, leading to a VPCF of 0.5 for a small blade diameter. When the Reynolds number increases, the correction factor increases from 0.5 and its value tends asymptotically to 1. Also, as the blade diameter tends to the pipe ID, the correction factor moves towards 1.
The Reynolds number is expressed below for fluid density in g/cc, diameter D in inches,
velocity in ft/sec and viscosity in cp:
$N_{Re} = 7.742 \times 10^{3}\,\frac{\rho\,D\,v}{\mu}$
Obviously, the value we are seeking - the fluid velocity - is part of the equation meaning that
an iterative solution is required.
The classical solution is to assume a value of velocity, typically based on a VPCF of 0.83, then
calculate the Reynolds number, from which a new value of VPCF would be calculated, hence a
corrected estimation of the flow velocity.
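A minimal sketch of this classical fixed-point iteration; the vpcf_from_reynolds() correlation below is a placeholder assumption (the real correction also depends on the blade-to-pipe diameter ratio), not a published chart:

def vpcf_from_reynolds(n_re):
    # Placeholder correlation: 0.5 in laminar flow, ~0.83 in turbulent flow.
    return 0.5 if n_re < 2000.0 else 0.83

def average_velocity(v_app, rho, d, mu, n_iter=20):
    # Classical scheme: guess VPCF, compute velocity and Reynolds number,
    # update VPCF from the correlation, and repeat.
    v = 0.83 * v_app                        # initial guess, VPCF = 0.83
    for _ in range(n_iter):
        n_re = 7.742e3 * rho * d * v / mu   # rho in g/cc, d in inches, v in ft/s, mu in cp
        v = vpcf_from_reynolds(n_re) * v_app
    return v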
In modern software, this has been replaced by a regression algorithm, and the single phase
calculation is only a specific case of what is done for much more complicated processes like
multiphase rate calculations or the processing of multiple probe tools.
The principle of the nonlinear regression process is that we take as unknowns the results we
wish to get, here the single-phase downhole rate Q.
In the case of a single phase calculation the target is the apparent velocity calculated after the
spinner calibration.
From any value of Q in the regression process, we calculate the velocity, hence the Reynolds
number, hence the VPCF, hence a simulated apparent velocity.
This allows us to create a function VAPP = f(Q). We then solve for Q by minimizing the error between the simulated apparent velocity and the measured apparent velocity.
$Q \;\rightarrow\; v \;\rightarrow\; N_{Re} \;\rightarrow\; VPCF \;\rightarrow\; V_{APP}$

Simulated apparent velocity: $V_{APP} = f(Q)$

Measured apparent velocity: $V_{APP}^{*}$

Minimize the error function: $Err = \left(V_{APP} - V_{APP}^{*}\right)^{2}$
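A hedged sketch of this single-phase regression, reusing the vpcf_from_reynolds() placeholder above; consistent units are assumed (e.g. velocities in ft/s and area in ft², giving Q in ft³/s), and this is an illustration rather than the actual Emeraude algorithm:

from scipy.optimize import minimize_scalar

def fit_rate(v_app_measured, area, rho, d, mu):
    # Forward model: Q -> average velocity -> Reynolds number -> VPCF -> simulated V_app,
    # then minimize the squared error against the measured apparent velocity.
    def error(q):
        v = q / area                                   # average velocity from the rate
        n_re = 7.742e3 * rho * d * v / mu
        v_app_sim = v / vpcf_from_reynolds(n_re)       # invert VM = VPCF * V_APP
        return (v_app_sim - v_app_measured) ** 2
    return minimize_scalar(error, bounds=(1e-6, 1e6), method="bounded").x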
In this water injector, five different calculation zones (grey) were selected to isolate the
contributions.
The non-linear regression described previously was executed on all of them to get Q.
The results are summarized by the QZT track where the value of a given calculation zone is
extended up to the inflow zone above, and down to the inflow below.
Inflow zones (black) can be distinct from the perforations (red), simply to capture the fact that not all of the perforated interval may produce or, in this case, take fluid.
The QZI track represents the contributions or injections; they are obtained as the difference between the rates above and below an inflow.
The last rate track, noted Q, represents the application of the non-linear regression at every
depth, the point being to obtain a continuous log everywhere faithful to the log data.
Having this log provides in particular a guide to refine the position of calculation zones.
In the complete log (Q) or for the zone rates (QZT) the calculation of the rate at Depth 1 is
independent of the calculation at Depth 2. As a result those rates may entail contributions of a
sign or amplitude that is not physically justified.
We can address this potential inconsistency in a global regression process described later in
this document.
The final track above shows the target VAPP and the simulated equivalent in green. Note that
we arbitrarily decided to take the apparent velocity as the target function, rather than the real
tool response in RPS.
We could have integrated the spinner calibration in the regression process and matched the
RPS measurements for the different selected passes.
The simulated (green) curves and the QZT logs are referred to as schematics in Emeraude.
This may be relevant if one wants to match the production above the top producing zone with the measured surface rates (if one relies on these). Numerically it amounts to allowing a multiplier on VPCF.
14.I.1 Definitions
14.I.1.a Holdups
This definition was given before but it is repeated for clarity. The holdup of a phase is the volume fraction occupied by that phase. The figure opposite shows heavy (blue) and light (red) phases and indicates the corresponding holdups.
In three phases with water ('w'), oil ('o') and gas ('g'): $Y_w + Y_o + Y_g = 1$.
The slippage velocity is defined as the difference between the light-phase and heavy-phase velocities: $V_s = V_L - V_H$.
When going uphill, the light phase will move faster and Vs will be positive. The opposite
situation will be encountered when going downhill where the heavy phase will go faster. The
slippage velocity is not something the PL tool measures, nor are the phase rates (at least with
conventional tools). Getting the rate values will only be possible if we can estimate the
slippage value using a correlation. There are many correlations available in the literature,
empirical or more rigorously based. For the time being let us simply assume that such a correlation will be available to us if we need it.
$Y_H + Y_L = 1 \qquad\qquad Q_H + Q_L = Q_T$

$V_S = V_L - V_H = \dfrac{Q_L}{A\,Y_L} - \dfrac{Q_H}{A\,Y_H} = \dfrac{Q_T - Q_H}{A\,(1 - Y_H)} - \dfrac{Q_H}{A\,Y_H}$

And to finish: $Q_L = Q_T - Q_H$
Holdup is a quantity we can measure directly, or infer from density. If we measure mixture
density and know the individual phase densities downhole, holdup can be obtained as shown
below:
$\rho = \rho_H\,Y_H + \rho_L\,Y_L \quad\Longrightarrow\quad Y_H = \dfrac{\rho - \rho_L}{\rho_H - \rho_L}$
Note that when density is measured with a gradio, the reading needs to be corrected for friction. This correction requires knowledge of the velocity and the fluid properties so, as in the single-phase case, such a calculation will require an iterative solution scheme.
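A minimal sketch of such an iterative scheme is shown below, assuming a placeholder friction term; the actual friction model and gradio response are, of course, those of the tool and software used.

def holdup_from_density(rho_mix, rho_heavy, rho_light):
    # Y_H = (rho_mix - rho_L) / (rho_H - rho_L)
    return (rho_mix - rho_light) / (rho_heavy - rho_light)

def holdup_from_gradio(rho_gradio, rho_heavy, rho_light, v_mix,
                       friction_term, tol=1e-6, max_iter=50):
    # Fixed-point iteration: assume a holdup -> mixture density -> friction correction
    # of the gradio reading -> new holdup, until the holdup stabilizes.
    y_h = 0.5
    for _ in range(max_iter):
        rho_mix = y_h * rho_heavy + (1.0 - y_h) * rho_light
        rho_corrected = rho_gradio - friction_term(v_mix, rho_mix)
        y_new = min(max(holdup_from_density(rho_corrected, rho_heavy, rho_light), 0.0), 1.0)
        if abs(y_new - y_h) < tol:
            return y_new
        y_h = y_new
    return y_h

# Example with a crude, purely illustrative friction term (not a real correlation):
friction = lambda v, rho: 0.002 * rho * v * abs(v)
print(holdup_from_gradio(rho_gradio=780.0, rho_heavy=1000.0,
                         rho_light=650.0, v_mix=1.2, friction_term=friction))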
Since we know how to calculate the total rate from the single phase equivalent, all we need is
a way of determining the slippage velocity. In the simplest situations, this could be done
manually. The chart below shows the Choquette correlation, representative of bubble flow in a vertical well for an oil-water mixture.
With such a chart, the slippage is obtained from the holdup and density difference. With a
spinner and a density measurement, the steps of a manual deterministic approach would be
straightforward:
(1) From the rates, the fluid and the local geometry, get the slip velocities
(2) From the slip velocities and the rates get the holdups
(3) From the holdups calculate the fluid mixture properties
(4) Calculate the simulated tool response using the relevant model (e.g. VPCF, friction equations, etc.).
The interest of such an approach is that any number and type of measurements can be chosen as targets, provided that they are sufficient and the problem is not underdetermined (too few measurements for the number of unknowns). Redundant measurements are also possible, in which case the regression will try to find a compromise based on the confidence assigned by the user to the different measurements.
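The sketch below illustrates, in Python, how redundant targets could be combined into a single weighted objective; the forward-model callables and the weighting scheme are illustrative assumptions, not the actual implementation.

import numpy as np
from scipy.optimize import minimize

def weighted_error(x, targets):
    # targets: list of (measured_value, forward_model, confidence_weight)
    return sum(w * (model(x) - measured) ** 2 for measured, model, w in targets)

def solve(x0, targets):
    return minimize(lambda x: weighted_error(x, targets), x0, method="Nelder-Mead").x

# Illustrative use: two redundant "measurements" of the same scalar unknown;
# the compromise is pulled toward the target with the higher confidence weight.
targets = [(1.00, lambda x: x[0], 1.0),
           (1.20, lambda x: x[0], 0.2)]
print(solve(np.array([0.5]), targets))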
It is important to realize that, in the above procedure, slippage correlations are a necessary evil used to relate the rates to the holdups, because holdups are the quantities we are best at measuring. When PL tools can measure phase rates or phase velocities directly, which is the Holy Grail of Production Logging, the procedure can do away with slippage correlations altogether.
In each flow regime, a specific slippage correlation will be applicable, with extreme cases such as mist flow, where no slip exists between the two phases. Many correlations will start with a determination of the flow regime using a flow map, and continue with the application of the corresponding slippage equation. Slippage correlations can be organized into the following categories:
Liquid-Liquid: This category gathers bubble flow correlations (e.g. Choquette), correlations developed for stratified flow (e.g. Brauner) and combinations of those.
Liquid-Gas: This is by far the largest population, with many empirical correlations (Duns & Ros, Hagedorn & Brown, Orkiszewski, etc.) and mechanistic models (Dukler, Kaya, Hasan & Kabir, Petalas & Aziz, etc.). Most of those correlations were primarily designed to represent wellbore pressure drops in the context of well performance analysis.
Three phase: There are very few such correlations and, when they exist, they address a very specific situation, e.g. stratified 3-phase flow (Zhang). In practice, three-phase slippage / holdup predictions from rates are made using two 2-phase models, typically one for the gas-liquid mixture and one for the water-oil mixture.
For a given mixture rate Q, we consider all the possible mixtures and plot the corresponding mixture density. The X-axis values range from 0 (100% light phase) to Q (100% heavy phase). The density at those end points is the relevant single-phase density. The equation derived previously for Q_H is now used to express Y_H:
$Y_H = \dfrac{Q_H}{\,Q_T - (1 - Y_H)\,V_S\,A\,}$
This equation shows that without slippage, the heavy phase holdup would be equal to the
heavy phase cut. This situation is represented by the red line on the plot below. With slippage, on the other hand, Vs is positive uphill, hence the equation above tells us that Y_H should be higher than the cut. The curve representing the case with slippage is thus above the no-slip line. The opposite situation would be expected downhill.
Another way of considering the plot below is to realize that, for a given cut, the higher the
slippage, the heavier the mixture will be. This is because the light phase is going faster and
therefore occupies less volume in the pipe. The light phase is ‘holding up’ the heavy phase,
leading to more heavy phase present at any depth than there would be with no slippage. For a
given solution with slippage, the density will read heavier than if there is no slip.
Fig. 14.I.5 – Density vs. heavy phase rate with and without slippage
We can look at this plot from one last perspective. If we have a measurement of density, the solution we should find (or that the non-linear regression will find) is obtained graphically by interpolating the relevant curve at this density value.
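Since the relation above is implicit in Y_H, it has to be solved numerically; the sketch below does so with a simple root-finding call, using illustrative values (a 30% heavy-phase cut and a positive slippage velocity).

from scipy.optimize import brentq

def heavy_holdup(q_heavy, q_total, v_slip, area):
    # Root of  Y_H * (Q_T - (1 - Y_H) * Vs * A) - Q_H = 0  on (0, 1)
    f = lambda y: y * (q_total - (1.0 - y) * v_slip * area) - q_heavy
    return brentq(f, 1e-9, 1.0 - 1e-9)

# Example: 30% heavy-phase cut with positive slip (uphill) -> holdup above the cut (about 0.37 here)
print(heavy_holdup(q_heavy=0.003, q_total=0.010, v_slip=0.15, area=0.02))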
For each zone, a non-linear regression is performed, and the result of this regression is presented graphically in the Zone Rates plot. The Y-scale can display density or holdups as relevant. Below is an illustration in a 2-phase oil-gas situation with spinner and density. The dashed line represents the measured density. The current solution is such that, for the selected correlation (Dukler), the predicted density value (horizontal dotted line) matches the measured one (horizontal dashed line).
Each colored line represents a different correlation. As we did previously, one way to look at
this graph is to consider how different the rate solution will be depending on the selected
correlation… The rationale behind a proper selection is to start by ruling out the correlations
developed for situations not applicable to the particular test. Beyond this first elimination, it is
possible to check which correlation is the most consistent with additional information, surface
rates in particular. This selection is a very important step of the analysis, and should at least be noted and justified. Relying on software defaults is not a sufficient justification.
[Log display over the 8200–8400 depth interval]
The QZI track represents the contributions or injections; they are obtained as the difference between the rates above and below an inflow. The Q track represents the application of the non-linear regression at every depth frame.
The two match views show the comparison between the target (red) and simulated (green) measurements. The simulated curves are obtained by feeding, at every depth, the known rates into the forward model, including in particular the slip correlation. The simulated curves and the QZT logs are referred to as schematics in Emeraude.
To avoid this, it is possible to solve for the entire well at once with a single regression, the global regression, where the unknowns are the zone contributions, the dQ's. Since the contributions are the direct unknowns, we can impose sign constraints upfront. At each iteration, the current estimate of the dQ's translates into a series of Q's on the calculation zones, which are injected into the forward model to calculate the objective function. Here again the objective function is evaluated on the calculation zones only; other components might be added to this error, such as a constraint using the surface rates.
Note that whether the regression is local or global, the end result is only influenced by the solution on the few user-defined calculation zones. This is why we call this the ‘Zoned’ approach.
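A minimal sketch of this structure is given below; the forward model, the cumulation convention (zones listed from top to bottom) and the surface-rate weight are assumptions made for the illustration only.

import numpy as np
from scipy.optimize import least_squares

def residuals(dq, measured_per_zone, forward_model, q_surface=None, w_surface=1.0):
    # Zones are assumed listed from top to bottom, so the rate flowing in a zone
    # is the sum of its own contribution and of all contributions below it.
    q_zones = np.cumsum(dq[::-1])[::-1]
    res = [forward_model(q, m) for q, m in zip(q_zones, measured_per_zone)]
    if q_surface is not None:
        # optional additional error component: total rate vs measured surface rate
        res.append(w_surface * (q_zones[0] - q_surface))
    return np.asarray(res)

def solve_global(dq0, bounds, measured_per_zone, forward_model, q_surface=None):
    # bounds expresses the sign constraints on the contributions, e.g. for a producer:
    # bounds = (np.zeros(n_zones), np.full(n_zones, np.inf))
    return least_squares(residuals, dq0, bounds=bounds,
                         args=(measured_per_zone, forward_model, q_surface)).x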
The only way to account for the changes seen in the data is to let the holdups differ locally from the model prediction and, at the same time, to complement the objective function with a term measuring how far the solution deviates from the slip model prediction. More precisely, with the Continuous approach the Global regression is modified as follows.
The main regression loop is still on the contributions, but the objective function now considers an error on the log points. In turn, at each depth, the simulated log values are evaluated by running a second regression on the holdups, minimizing an error made of the difference between simulated and measured values together with a new constraint using the slip model holdup predictions. In cases where one can do without the slip model, this new constraint is simply not included.
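The nested structure can be sketched as follows; simulate_logs(), slip_model_holdup(), rates_at_depth() and the weights are placeholders for the actual forward model and are only meant to show how the inner, per-depth regression sits inside the outer one.

from scipy.optimize import minimize_scalar

def local_error(y, q_local, measured_logs, simulate_logs, y_slip, w_slip):
    # Mismatch between simulated and measured log values at this depth ...
    mismatch = sum((sim - meas) ** 2
                   for sim, meas in zip(simulate_logs(q_local, y), measured_logs))
    # ... plus a constraint pulling the local holdup toward the slip-model prediction
    return mismatch + w_slip * (y - y_slip) ** 2

def depth_error(q_local, measured_logs, simulate_logs, slip_model_holdup, w_slip):
    y_slip = slip_model_holdup(q_local)
    inner = minimize_scalar(local_error, bounds=(0.0, 1.0), method="bounded",
                            args=(q_local, measured_logs, simulate_logs, y_slip, w_slip))
    return inner.fun

def global_error(dq, rates_at_depth, logs_at_depth, simulate_logs,
                 slip_model_holdup, w_slip=1.0):
    # rates_at_depth(dq) maps the current contributions to a rate at every depth frame
    return sum(depth_error(q, logs, simulate_logs, slip_model_holdup, w_slip)
               for q, logs in zip(rates_at_depth(dq), logs_at_depth))

# global_error would then be minimized over the contributions dq with any standard optimizer;
# when enough information is available to do without the slip model, w_slip is simply set to 0.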
This example is now interpreted with the Continuous approach and the Global regression
rerun. In the end, there are some differences. The match looks better overall, but at the
expense of not honoring the slip models. The deviation from the slip models is indicated in the
rightmost track (line=model, markers=solution). Note that the number of depth samples on
the logs is user defined.
[Continuous interpretation results over the 4500–4600 depth interval: Depth, Velocity match, Water holdup match, Density match, Gas holdup match, QZT, QZI and Slip velocity match tracks]
14.I.9.b So?
In the above example there is no drastic benefit in using the Continuous method. But there are cases where the reverse will apply, for instance when the data is unstable between contribution intervals. In such cases, the location of the calculation zones may dangerously impact the Zoned method results. Note however that the Continuous method is also influenced by the location of the calculation zones inside the inflows, since the way the inflows are split has a direct influence on the shape of the simulated logs over the inflows. Another situation where the Continuous method might provide a better answer is with temperature, since the temperature is essentially an integral response. In Emeraude the two methods, Zoned and Continuous, are offered in parallel and you can switch from one to the other at any stage.
It is important to stress that a nice-looking match does not necessarily mean a right answer. It all comes down, at some stage, to the interpreter's judgment. Also, any regression is biased by the weights assigned to the various components of the objective function. Different weights will lead to different answers; the starting point will also be critical, as a complex objective function will admit local minima. So the Continuous approach is not a magical answer, and more complex does not necessarily mean better. The Continuous approach is also more computationally intensive and a bit of a black box.
Another possible way of dealing with this situation is to use the temperature quantitatively as
one of the target measurements. This obviously requires a forward temperature model
representing the necessary thermal exchanges between the fluid, the reservoir, and the well.
Horizontal wells are rarely strictly horizontal and, unfortunately, slippage velocities, and hence holdups, are very sensitive to slight changes of deviation around the horizontal.
The pictures below, taken in the Schlumberger Cambridge Research flow loop, show this dependence. A 50-50 mixture of water and oil is flowed. A blue dye is injected into the water and, at the same time, a red dye into the oil. At 90° both phases move at the same speed and the holdups are 50% each. At 88°, i.e. going uphill by only 2 degrees, the oil flows much faster and its holdup decreases significantly. Conversely, going downhill by 2°, at 92°, the situation is reversed, with the water going faster.
It is not difficult to imagine that, with undulations, the change from one behavior to the other will not be immediate, leading to intermittent regimes such as waves. In this situation, the response of conventional tools will most of the time be useless. Even if the tools were reliable, there are few slippage models that capture the physical behavior in this situation.
The undulation of the wellbore will also create natural traps for heavy fluids in the lows and light fluids in the highs. Those trapped fluids will act as blockages and obviously affect the tool responses where they occur. A last condition worth mentioning is that the completion may offer flow paths that are not accessible to a conventional tool, e.g. a slotted liner with multiple external packers.
For all the above reasons, specific tools were developed, called Multiple Probe Tools, or MPT. The goal of those tools is to replace a single-value response with a series of discrete measurement points in order to better characterize the flow behavior and, ultimately, to remove the need for slippage models.
In a water continuous phase, current is emitted from the probe tip and
returns to the tool body. A droplet of oil or gas has only to land on the
probe tip to break the circuit and be registered.
The signal from the FloView probe lies between two baselines, the continuous water-phase
response and the continuous hydrocarbon-phase response. To capture small transient bubble
readings a dynamic threshold is adjusted close to the continuous phase and then compared
with the probe waveform. The result is a binary water holdup signal which, when averaged over time, becomes the probe holdup. The number of times the waveform crosses the threshold is counted and divided by 2 to deliver a probe bubble count.
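The processing described above can be sketched as follows; the fixed-offset threshold is a simplification of the tool's dynamic threshold, and the baseline convention is an assumption made for the example.

import numpy as np

def probe_holdup_and_bubbles(waveform, water_baseline, hydrocarbon_baseline,
                             threshold_fraction=0.1):
    # Binary water indicator from a threshold placed close to the water baseline,
    # then: holdup = time average, bubble count = threshold crossings / 2.
    w = np.asarray(waveform, dtype=float)
    threshold = water_baseline + threshold_fraction * (hydrocarbon_baseline - water_baseline)
    if hydrocarbon_baseline > water_baseline:
        in_water = w < threshold
    else:
        in_water = w > threshold
    holdup = in_water.mean()
    crossings = np.count_nonzero(np.diff(in_water.astype(int)) != 0)
    return holdup, crossings / 2.0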
Light emitted at a suitable frequency is fed down an optical fiber through a Y-coupler and
finally to an optical probe made from synthetic sapphire crystal. Light that does not escape is
returned via the Y-coupler to a photodiode and is converted to a voltage.
The signal from the optical probe is at or below the gas baseline and at or above the oil
baseline. To capture small transient bubble readings a dynamic threshold is adjusted close to
the continuous gas phase and close to the continuous liquid phase. The threshold is then
compared with the probe waveform to deliver a binary gas holdup signal, which is averaged
over time. The number of times the waveform crosses the threshold is counted and divided by
2 to deliver a probe bubble count.
The operating principle of the holdup sensors is the one explained in the previous sections. The calibration of the individual FSI spinners is fairly straightforward, as the spinners are expected to be at the same vertical location from one pass to the next.
Fig. 14.K.8 – CAT; Courtesy Sondex
Fig. 14.K.9 – Normalized values in [Gas-Oil] then [Oil-Water] mixtures
The CAT uses 12 capacitance probes distributed on a circumference. As with any capacitance
sensor, the CAT probes discriminate mostly between water and hydrocarbons. The contrast in
dielectric values for oil and gas may however be used to differentiate the two fluids.
In a stratified environment the probe response can be used to get holdup values using 2 two-
phase calibrations as represented on the graph above. When the probe normalized response is
between 0 and 0.2 a gas-oil system is considered, and from 0.2 to 1, an oil-water system. To
solve in three phases without any assumption about the local holdups, a 3-phase response can be used as shown below. This response is a surface extending the previous graph, which constitutes the intersection of that surface with the side walls.
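A sketch of the stratified, two-calibration reading is given below; the linear interpolation inside each segment and the gas-to-water ordering of the normalized scale are illustrative assumptions, the actual calibration curves being those of the figure.

def cat_probe_holdups(normalized_response, split=0.2):
    # Assumes the normalized scale runs from gas (0) through oil (split) to water (1),
    # with linear interpolation inside each two-phase segment (illustrative only).
    r = min(max(normalized_response, 0.0), 1.0)
    if r <= split:                    # gas-oil segment
        yo = r / split
        return 1.0 - yo, yo, 0.0      # (Yg, Yo, Yw)
    yw = (r - split) / (1.0 - split)  # oil-water segment
    return 0.0, 1.0 - yw, yw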
(2) Combination of the local properties to get phase rates, and integration
Imagine for instance that we know the holdups and the velocity everywhere. Locally, we can assume that there is no slippage and, at every location in the cross-section, calculate the phase rates as the local velocity multiplied by the local holdup. By integrating these local phase rates, we can get the phase rates directly, and therefore produce a final result without the need for slippage models.
To correct the previous situation, it is possible to alter the linear model using gravity segregation constraints, i.e. imposing that the water holdup decreases from bottom to top, or that the gas holdup increases from bottom to top.
A non-linear regression can be used to try to match the values and satisfy the constraints at the same time. The result is shown below.
Fig. 14.L.2 – RAT mapping with the linear model; segregation constraint applied
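The segregation constraint amounts to a least-squares fit under a monotonicity condition; the sketch below uses the classical pool-adjacent-violators idea on probe values ordered from bottom to top, as a simplified stand-in for the non-linear regression mentioned above.

import numpy as np

def monotonic_fit_nonincreasing(values):
    # Least-squares non-increasing fit of a 1-D sequence given from bottom to top:
    # fit a non-decreasing sequence to the reversed data, then reverse back.
    blocks = []                                  # each block: [mean, count]
    for v in reversed(list(values)):
        blocks.append([float(v), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, n2 = blocks.pop()
            v1, n1 = blocks.pop()
            blocks.append([(v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2])
    fitted = []
    for mean, count in blocks:
        fitted.extend([mean] * count)
    return np.array(fitted[::-1])                # back to bottom-to-top order

# Example: noisy probe water-holdup readings, bottom to top, forced to decrease upward
print(monotonic_fit_nonincreasing([0.95, 0.90, 0.93, 0.55, 0.60, 0.10]))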
Like MapFlo, it is driven by two parameters. The main idea behind this model is to obtain the velocity profile by applying a linear transformation of the holdup profile and then rounding the profile near the pipe walls. More precisely, the velocity profile is obtained with the equation below, regressing on its two parameters to match the velocity projections.
The next Figure shows how the vertical holdup and velocity profiles were obtained on an FSI
example with MapFlo and Prandtl combined.
The water holdup is 0 everywhere; the gas holdup profile is shown in red (see the Yg scale at
the bottom). The velocity profile is displayed with the yellow curve; one spinner was ignored in
this case. The squares represent the discrete measurements (blue=Yw, red=Yg, yellow=V).
Fig. 14.L.4 – FSI mapping in a deviated well with Mapflo and Prandtl
It should be noted that the Prandtl model rounds the edge of the velocity profile over the entire circumference, not just at the top and bottom.
Finally, the optimization can be based on several passes at the same time if the conditions are
stable from one pass to the other. When the tool rotates (MAPS, PFCS, GHOST), this multiplies the measurement points in the cross-section at every depth and can compensate for faulty probes.
14.L.3 Integration
The mapping allows integration over the cross-section at every depth in order to get the average values:

$\bar{Y}_w = \dfrac{1}{A}\int_A Y_w\,dA \qquad \bar{Y}_o = \dfrac{1}{A}\int_A Y_o\,dA \qquad \bar{Y}_g = \dfrac{1}{A}\int_A Y_g\,dA \qquad \bar{V} = \dfrac{1}{A}\int_A V\,dA$
As explained earlier, with the assumption of no local slippage, we can use the local velocity and holdups to obtain the phase rates:
$Q_w = \int_A V\,Y_w\,dA \qquad Q_o = \int_A V\,Y_o\,dA \qquad Q_g = \int_A V\,Y_g\,dA$
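In practice these integrals are evaluated on the discretized maps; a minimal sketch is given below, where the uniform grid, the mask convention and the numerical values are purely illustrative.

import numpy as np

def integrate_maps(v_map, y_maps, cell_area, mask):
    # v_map: velocity grid; y_maps: dict of phase holdup grids; mask: 1 inside the pipe, 0 outside
    area = mask.sum() * cell_area
    v_avg = (v_map * mask).sum() * cell_area / area
    holdups = {p: (y * mask).sum() * cell_area / area for p, y in y_maps.items()}
    rates = {p: (v_map * y * mask).sum() * cell_area for p, y in y_maps.items()}
    return v_avg, holdups, rates

# Tiny illustrative example: a 2x2 "grid" entirely inside the pipe
mask = np.ones((2, 2))
v = np.array([[1.0, 1.2], [0.8, 1.0]])
y = {"water": np.array([[0.2, 0.1], [0.6, 0.4]])}
print(integrate_maps(v, y, cell_area=0.01, mask=mask))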
14.L.4 Interpretation
The interpretation is conducted using the process outputs: holdups, phase rates and total velocity, together with any additional measurement available. Even though the answers we are seeking are in essence already provided (we have the phase rates everywhere), the interpretation is still a required step to come up with actual zone contributions, possibly honoring additional constraints (sign, surface rates).
The Continuous method is the best choice here, as it contains a built-in mechanism to bypass any slippage model when it recognizes that enough information is supplied. Below is a typical output example for an FSI job.
14.M SIP
Selective Inflow Performance (SIP) provides a means of establishing the steady-state inflow relationship for each producing layer.
The well is flowed at several different stabilized surface rates and for each rate, a production
log is run across the entire producing interval to record simultaneous profiles of downhole flow
rates and flowing pressure. Measured in-situ rates can be converted to surface conditions
using PVT data.
For each survey/interpretation, each reservoir zone is thus associated with a [rate, pressure] couple used in the SIP calculation.
For the pressure, the interpretation reference channel is interpolated at the top of the zone.
For the rate, the value used in the SIP is the contribution. It is calculated for a given reservoir
zone as the difference between the values interpolated on the schematic at the top and the
bottom of that zone.
Fig. 14.M.1 – SIP example with 2 layers, 3 rates and a shut-in survey
Straight line:

$Q = PI\,(\bar{p} - p_{wf})$  or  $Q = PI\,(\bar{p}^{2} - p_{wf}^{2})$

Fetkovich or C&n:

$Q = C\,(\bar{p} - p_{wf})^{n}$  or  $Q = C\,(\bar{p}^{2} - p_{wf}^{2})^{n}$
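As an illustration, the straight-line model in its pressure (liquid) form can be fitted for one layer with a simple least-squares line through the survey couples; the numbers below are purely illustrative.

import numpy as np

def sip_straight_line(rates, pressures):
    # Fit p_wf = p_bar - Q / PI through the survey couples of one layer:
    # the intercept at Q = 0 is the layer pressure, the slope gives -1/PI.
    slope, intercept = np.polyfit(np.asarray(rates), np.asarray(pressures), 1)
    return intercept, -1.0 / slope   # (p_bar, PI)

# Purely illustrative couples for one layer: 3 flowing surveys plus the shut-in point
rates = [0.0, 500.0, 900.0, 1300.0]           # layer contributions
pressures = [3000.0, 2950.0, 2910.0, 2870.0]  # pressure at the top of the zone
print(sip_straight_line(rates, pressures))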
It is possible to generate the SIP after correcting the pressures to a common datum. This is
typically done using the shut-in pressure profile for the estimation of the hydrostatic head
between layers. A pressure-corrected SIP highlights any possible crossflow between layers.
14.N Temperature
Temperature can be used quantitatively to replace a missing or faulty spinner provided an
adequate temperature model is available. This model should provide the temperature
everywhere in the wellbore, from an assumed distribution of contributions. Such a model
needs to capture the following:
Temperature changes occurring inside the reservoir (compressible effects, friction). Those are often reduced to a simple Joule-Thomson cooling/heating effect, but the reality is more complex.
It is beyond the scope of this chapter to describe the Emeraude temperature models in detail.
There are currently two such models, the most advanced one capturing all the above effects by
solving numerically a general energy equation coupled to a mass balance equation. Below is an
example match in an apparent downflow situation.
Working with temperature requires a large number of inputs, typically: geothermal profile,
rock thermal properties, fluid heat capacities, thermal properties of the completion elements,
reservoir petrophysical properties, etc. If those parameters are not available, they become additional degrees of freedom of the problem, leading to multiple possible answers. In any case, discriminating the phases still requires fluid identification measurements such as density or holdups.
TABLE OF CONTENTS
1 – INTRODUCTION .................................................................................................... 3
2 – THEORY .............................................................................................................. 19
2.E OUTER BOUNDARY CONDITIONS .............................................................................. 36
2.I.3.b Normalized pseudopressures .................................................................................. 64
3.C.3 Bourdet Derivative & Infinite Acting Radial Flow (IARF) ................................... 79
3.D.4.c Remaining limitations ............................................................................................ 93
3.I.4 Phase redistribution ...................................................................................... 124
3.J.3.b Integrating the material balance correction in the analytical model............................ 129
4.D.3.d Flowing gas material balance plot and P-Q diagnostic plot ........................................ 155
4.D.4 General major gas issues ............................................................................ 155
4.D.4.a Correcting the pressure to sandface ...................................................................... 155
6.C.2 Time dependent well model......................................................................... 179
6.F.4.b Sensitivity to the vertical distance to a constant pressure boundary .......................... 201
6.F.6 Adding wellbore storage ............................................................................. 202
7.A INTRODUCTION ............................................................................................... 231
7.H COMPOSITE RESERVOIRS .................................................................................... 268
8.C.2.b Build-up response ............................................................................................... 291
8.C.5 The case of shut-ins after a short production time ......................................... 293
8.D.6 Example of intersecting faults match with field data ....................................... 298
8.G.3 Semilog analysis ........................................................................................ 312
8.M.1 Can we see a fault in a build-up and not in the drawdown? ............................. 332
9.B PHASE EQUILIBRIUM ......................................................................................... 344
9.H.2 Wet gas and condensate single phase analog ................................................ 362
9.K.2.b Oil FVF ............................................................................................................... 370
10.F.1 Non-Darcy flow.......................................................................................... 404
10.F.2 Flow with Water and Hydrocarbons (Oil OR Gas) ............................................ 406
11.E.4 Other application of the numerical model ...................................................... 445
12.B.1.e Vogel Oil IPR for Solution gas drive reservoir ....................................................... 458
13.C WAVELET FILTRATION – AN OVERVIEW ..................................................................... 471
14.G SPINNER CALIBRATION AND APPARENT VELOCITY CALCULATION ......................................... 504
14.G.3 Velocity from calibration: threshold handling and Vapparent ........................... 507
14.G.3.a Threshold options ............................................................................................ 507
14.I.6 Emeraude Zoned approach and the Zone Rates plot ....................................... 518
14.K.4.b RAT: Resistance Array Tool ............................................................................... 529