
IPTC-19418-MS

Intelligent Oilfield - Cloud Based Big Data Service in Upstream Oil and Gas

Xudong Yang, Oladele Bello, Lei Yang, Derek Bale, and Roberto Failla, Baker Hughes, a GE company

Copyright 2019, International Petroleum Technology Conference

This paper was prepared for presentation at the International Petroleum Technology Conference held in Beijing, China, 26 – 28 March 2019.

This paper was selected for presentation by an IPTC Programme Committee following review of information contained in an abstract submitted by the author(s).
Contents of the paper, as presented, have not been reviewed by the International Petroleum Technology Conference and are subject to correction by the author(s). The
material, as presented, does not necessarily reflect any position of the International Petroleum Technology Conference, its officers, or members. Papers presented at
IPTC are subject to publication review by Sponsor Society Committees of IPTC. Electronic reproduction, distribution, or storage of any part of this paper for commercial
purposes without the written consent of the International Petroleum Technology Conference is prohibited. Permission to reproduce in print is restricted to an abstract of
not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of where and by whom the paper was presented.
Write Librarian, IPTC, P.O. Box 833836, Richardson, TX 75083-3836, U.S.A., fax +1-972-952-9435.

Abstract
The Oil and Gas (O&G) industry is embracing modern and intelligent digital technologies such as big data
analytics, cloud services, machine learning etc. to increase productivity, enhance operations safety, reduce
operation cost and mitigate adverse environmental impact. Challenges that come with such an oil field digital
transformation include, but are certainly not limited to: information explosion; isolated and incompatible
data repositories; logistics for data exchange and communication; obsolete processes; cost of support; and
the lack of data security. In this paper, we introduce an elastically scalable cloud-based platform to provide
big data service for the upstream oil and gas industry, with high reliability and high performance on real-time
or near real-time services based on industry standards. First, we review the nature of big data within O&G,
paying special attention to distributed fiber optic sensing technologies. We highlight the challenges and
necessary system requirements to build effective and scalable downhole big data management and analytics.
Secondly, we propose a cloud-based platform architecture for data management and analytics services.
Finally, we present multiple case studies and examples of our system as it is applied in the field. We
demonstrate that a standardized data communication and security model enables high efficiency for data
transmission, storage, management, sharing and processing in a highly secure environment. Using a standard
big data framework and tools (e.g., Apache Hadoop, Spark and Kafka) together with machine learning
techniques towards autonomous analysis of such data sources, we are able to process extremely large and
complex datasets in an efficient way to provide real-time or near real-time data analytical service, including
prescriptive and predictive analytics. The proposed integrated service comprises multiple main systems,
such as a downhole data acquisition system; data exchange and management system; data processing and
analytics system; as well as data visualization, event alerting and reporting system. With emerging fiber
optic technologies, this system not only provides services using legacy O&G data such as static reservoir
information, fluid characteristics, well log, well completion information, downhole sensing and surface
monitoring data, but also incorporates distributed sensing data (DxS) such as distributed temperature sensing
(DTS), distributed strain sensing (DSS) and distributed acoustic sensing (DAS) for continuous downhole
measurements along the wellbore with very high spatial resolution. It is the addition of fiber optic distributed sensing technology that has exponentially increased the volume of downhole data that must be transmitted and securely managed.

Introduction
A number of advanced monitoring and measurement sensor systems have been developed in the past
20 years to collect various types of downhole data. In general, downhole data includes high resolution
high frequency pressure and temperature, in-well multiphase flow rate and phase cut, distributed sensing
data from distributed acoustic sensors (DAS), distributed temperature sensors (DTS), distributed strain
sensors (DSS), discrete distributed temperature sensors (DDTS), time-lapse seismic, electrical potential and
production logging tools. There are several technologies that can be used in monitoring and measurement
sensor systems such as electronic sensors (pressure and temperature, discrete temperature, single and
multiphase flow meters, streaming potential, permanent 3D resistivity based on electromagnetic method,
permanent downhole seismic); fiber optic sensors (pressure and temperature, discrete temperature and
discrete strain sensors, single and multiphase flow meters); permanent downhole seismic sensors, distributed
temperature and distributed strain sensors; distributed acoustic/vibration sensors; distributed pressure
sensors (Almulla, 2012; Duru and Horne, 2010; Williams et al., 2015). The application areas can be
grouped into the following categories: condition monitoring, well performance, well stimulation, flow
assurance, advanced completions and reservoir characterization.
Downhole big data is characterized by multi-source heterogeneous data, wide distribution, and a dynamically growing data mode. More so, downhole big data is defined by volume, variety, velocity, veracity, variability, visualization and value, dimensions for which traditional data processing methods and tools are not adequate. Volume refers to the vast amount of data generated daily, which makes most datasets too large to store and analyze using traditional technologies. Downhole big data is a special dataset that presents many opportunities as well as many tough challenges. The first challenge is the massive amount of data. Though the volume of downhole big data may not equal that generated by traditionally data-intensive industries, the large amount of data still presents a significant challenge for the petroleum industry. This challenge is reflected not only on the storage side, but more importantly in the analysis and processing of the downhole big data. Velocity refers to how fast the data is generated, stored, analyzed and visualized. Data processors require time to process the data and update the databases, yet in the era of downhole big data, real-time data is generated continuously, which is a challenge. For the many real-time tasks in intelligent well and reservoir management systems, such as flow assurance, condition monitoring, well stimulation, advanced completions, well performance, reservoir characterization, and reservoir production operation performance, dynamic data- or model-driven algorithms that need many hours to run are not adequate (Lumens, 2014; Li et al., 2011; Moreno, 2014; Ramos, 2015). Businesses and organizations can extract value from very large volumes of a wide variety of downhole big data by enabling online real-time operations. Variety refers to the increasingly complex data types and data formats.
The increasing utilization of big data in the petroleum industry to support dynamic data- and model-driven decision making is driving a demand for performance enhancement. Data analysis using traditional relational database systems with parallel cluster computing and/or grid computing is expensive to scale. When it comes to the big data problem, much attention has been paid to cloud computing for its on-demand self-service, ubiquitous network access, location-transparent resource pooling, rapid elasticity, and measured pay-per-use service. Cloud architecture can be divided into four categories: private, public, hybrid and community clouds. Numerous papers and technical literature dealing with cloud computing applications have been reported in the public domain (Bello et al., 2014; Bello et al., 2016). Cloud computing services are categorized into three standard types: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
Although there are some big data management and analytics platforms available in the market, they are
not widely deployed in the petroleum industry because of the wide gaps between these platforms and the
special needs of the industry (i.e., data types, real-time big data processing, etc.). In this paper, we propose
the use of the private cloud computing model to provide a scalable data storage and analytics service platform for fiber-optics-based monitoring systems. Our proposed solution is based on Apache Kafka for data ingestion,
Apache Spark for in-memory data processing, Apache Cassandra for storing raw and processed results, and
the INT GeoToolkit library for visualization.
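As a minimal illustration of how these components fit together (a hedged sketch only: the topic name, brokers, keyspace, and column names below are hypothetical, not the production schema), a Python ingestion worker might consume DTS traces from Kafka and persist them to Cassandra:

    import json
    from kafka import KafkaConsumer          # kafka-python client
    from cassandra.cluster import Cluster    # DataStax Python driver

    # Hypothetical topic, broker, and keyspace, for illustration only.
    consumer = KafkaConsumer(
        "dts.traces",
        bootstrap_servers=["broker1:9092"],
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )
    session = Cluster(["cassandra1"]).connect("monitoring")

    insert = session.prepare(
        "INSERT INTO dts_trace (well_id, day, ts, depths, temps) "
        "VALUES (?, ?, ?, ?, ?)"
    )
    for msg in consumer:
        t = msg.value  # one DTS trace: full depth and temperature arrays
        session.execute(insert, (t["well_id"], t["day"], t["ts"],
                                 t["depths"], t["temps"]))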

Cloud-Based Big Data Management and Analytics Service Platform


Benefits
The computational and data-intensive challenges presented by the requirements of fiber-optics based
downhole monitoring systems can be met using a cloud computing approach. Given the availability of
high performance computing (HPC) resources and big data tools, the cloud computing system delivers the
following benefits to customers:
1. Cost Savings: Significantly reduces capital investment with zero in-house hardware, eliminating the costs of hardware ownership (building, power, maintenance, etc.) as well as software ownership costs for upfront licenses, upgrades, IT support and maintenance.
2. Reliability: A massive pool of redundant IT resources, such as a distributed cloud-based database, together with quick failover mechanisms; most cloud-based services offer a Service Level Agreement (SLA) that guarantees 24/7/365 operation and 99.99% availability.
3. Manageability: The vendor manages the infrastructure under SLA-backed agreements and provides enhanced, simplified IT management and maintenance capabilities through central administration of resources. Users enjoy a simple web-based interface for accessing software, applications and services, without the need for installation, and an SLA ensures the timely and guaranteed delivery, management and maintenance of IT services.
4. Strategic Edge: Cloud computing allows an operator to set aside IT infrastructure concerns and focus on key business activities and objectives. It can also reduce the time needed to bring newer applications and services to market, and ever-increasing computing resources provide a competitive edge.

Requirements
Our goal is to develop a cloud-based big data management and analytics service that supports all O&G upstream big data, including fiber optics data acquisition, management, processing (analytics) and visualization, presented in a simple, interactive, and easily accessible platform. The following requirements motivated the implementation:
1. Expandability, Scalability and Portability – The system should be able to handle any number of
downhole data sources including pressure and temperature gauges (PDG), downhole flow meters
(DHFM), distributed temperature sensors (DTS), distributed strain sensors (DSS), distributed acoustic
sensors (DAS), time-lapse seismic, production logging tools (PLT), and others. Scalability refers to
the system being capable of easily handling an increase in its subscribers. Portability means that it should be possible to make the system work anywhere; its design must therefore be platform and operating system independent, so that the system can be deployed globally, in-country, or within a company to support different data governance policies.
2. Vendor Neutral – The system should be able to handle data from any type of downhole sensors and
hence deal with the various data formats the sensor signals might generate. The system needs to be
vendor neutral by supporting industry standard data formats including PRODML, WITSML, LAS,
OPC etc. for data exchange between data sources and data center.
3. Web-based Solution – This allows users to access their data and solutions through a simple web browser (PC or mobile device), anywhere and anytime. The architecture should provide a clear, browser-based GUI for standard applications, plus web service interfaces for third-party tools to directly retrieve data from the cloud data server, using the industry standard PRODML for fiber optic distributed sensing
data (DTS, DAS and DSS etc.) and OPC for real-time and historic PI tag data.
4. Secure Data Management – The system must provide secure data communication and management
to ensure the integrity and security of customers' data. Users can access only their own asset information and data, through proper entitlement. All data are transferred and exchanged through secured
communication channels.
5. Options of Connectivity – The architecture must have the provision and flexibility for downhole
instrumentations with indirect network connectivity, direct network connectivity, and instrumentation
with no connectivity. The instrumentation with indirect network connectivity means the downhole big
data hardware is first connected directly to the corporate network (LAN), then transfer to the cloud.
Usually there is a network firewall between the device and the corporate servers, and special security
requirements may be needed for sending the data over the firewall, both from field devices and to the
cloud server. Instrumentation with direct network connectivity means the downhole big data hardware
has network connectivity directly to the cloud through wireless network. Such is the case with very
remote devices with a cellular or satellite data communication service. The device has a unique IP
address that is visible on the internet. The instrumentation with no connectivity highlights a situation
where the downhole hardware has zero connectivity and all data is stored locally and later delivered
to a super user in the corporate environment.
6. Data Recovery – With either direct or indirect network connectivity, the data server must support
multiple communications protocols. In addition to polling data from field devices, the system must be
able to update the polling engine subsystem and provide services to read buffered data logs in the event of communication loss or planned maintenance outages between the data server and the distributed data sensing devices.
7. Time Synchronization – The system should be able to simultaneously process multiple, time-varying
data sequences that might be sampled at different rates, which represent measurements of various
attributes of the monitored system. Time synchronization becomes a very important issue while
dealing with multi-stream data.
8. Other system requirements are as follows:
○ Data Validation: Ensure data is valid by associating measured data with corresponding
instrument metadata such as calibration and configuration parameters.
○ Unit: Data is stored in an internal standard unit system, with support for different unit systems on data import and export.
○ Time Zone: Real-time data is stored in coordinated universal time (UTC), with support for the user's desired time zone on data import and export, as the sketch below illustrates.
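For instance, the UTC storage convention with per-user time zones on export can be expressed in a few lines of Python (a simple illustration, not the platform's implementation):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+ standard library

    def to_storage_time(local_dt: datetime) -> datetime:
        """Normalize any incoming timestamp to UTC before it is stored."""
        return local_dt.astimezone(timezone.utc)

    def to_user_time(stored_dt: datetime, user_tz: str) -> datetime:
        """Convert a stored UTC timestamp to the user's preferred zone on export."""
        return stored_dt.astimezone(ZoneInfo(user_tz))

    stored = to_storage_time(
        datetime(2019, 3, 26, 9, 30, tzinfo=ZoneInfo("Asia/Shanghai")))
    print(stored)                                   # 2019-03-26 01:30:00+00:00
    print(to_user_time(stored, "America/Chicago"))  # 2019-03-25 20:30:00-05:00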

System Architecture
Fig. 1 presents the system architecture and data flow for the cloud-based big data management and analytics service platform. The architecture consists of three modules: (1) a data acquisition and communication module, (2) a data processing module built on the cloud computing platform, and (3) a data visualization and web application module. The data acquisition and communication module carries data from the wellsite to the cloud either directly, through a satellite or cellular link, or indirectly, through the operator's internal network, and it uses an open architecture that enables third-party connectivity.

Figure 1—Execution Flow of the Fiber Optics DTS/DSS/DAS Data Management & Analytics Solution

The cloud computing platform pulls the data from individual wells directly through a satellite or cellular link, or indirectly through the operator's internal network. This system is in charge of several relevant functions: data processing, data storage, event detection, diagnostics and prognostics, probabilistic forecasting and optimization for improved decision making, as well as KPI and alarm reporting. The proposed system relies heavily on Apache Cassandra for its DTS data model; the same schema is used for streaming and historical data. Apache Cassandra is a fully distributed, decentralized NoSQL database that provides high availability of DTS data, ease of operations, and easy distribution of data across multiple data centers built on top of a cluster. Its architecture makes it easily scalable and highly available: instead of a traditional master-slave architecture, Cassandra uses a peer-to-peer distributed architecture in which each node of the cluster is identical to every other node. The system offers different ways to archive data: (1) automatic, age-based: when data reaches a certain age (configurable by the administrator) it is taken offline; (2) automatic with retention: when data reaches a certain age, most of it is archived but representative and/or averaged logs are left behind; (3) exceptions: the archival process ignores any downhole big data (measurements or interpretation logs) carrying specific administrator-configured tags, so that such data is never archived, owing to its business relevance.
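As a hedged illustration of such a DTS data model (the keyspace, table, and column names are hypothetical, not the paper's actual schema), a Cassandra table serving both streaming appends and historical range queries might be created from Python as follows:

    from cassandra.cluster import Cluster

    session = Cluster(["cassandra1"]).connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS monitoring
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
    """)
    # One partition per well per day keeps partitions bounded; clustering
    # by timestamp serves both streaming appends and historical scans.
    session.execute("""
        CREATE TABLE IF NOT EXISTS monitoring.dts_trace (
            well_id text,
            day     date,
            ts      timestamp,
            depths  list<double>,
            temps   list<double>,
            PRIMARY KEY ((well_id, day), ts)
        ) WITH CLUSTERING ORDER BY (ts DESC)
    """)

The ingestion worker sketched earlier writes into exactly this kind of table; the replication factor of 3 across identical peer nodes illustrates how availability is obtained without a master node.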
There exist multiple tools that allow us to write parallel and distributed applications, such as Apache Hadoop, Apache Spark, and Apache Storm. For our proposed pipeline, we have used Apache Spark as the distributed and parallel processing system. Apache Spark is an in-memory cluster computing framework that was initially developed to run iterative machine learning algorithms. Spark keeps intermediate results in memory rather than storing them on disk, which can make it up to 100 times faster than Apache Hadoop MapReduce. Resilient distributed datasets (RDDs) are partitioned collections of records that cannot be changed once created. RDDs can be created from input datasets or by applying operations to existing RDDs. Two types of operations can be performed on RDDs: transformations and actions. For iterative machine learning algorithms, RDDs can be cached in memory on worker nodes so that values computed in a previous iteration need not be recomputed.
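A minimal PySpark fragment illustrates this transformation/action distinction and caching (illustrative data only; no field schema is implied):

    from pyspark import SparkContext

    sc = SparkContext(appName="dts-demo")

    # Transformations are lazy: nothing executes when they are declared.
    traces = sc.parallelize([(1500.0, 92.1), (1510.0, 92.4), (1520.0, 95.0)])
    hot = traces.filter(lambda dt: dt[1] > 92.2)

    # cache() keeps the filtered RDD in memory on the workers, so an
    # iterative algorithm does not recompute it on every pass.
    hot.cache()

    # Actions trigger execution and return results to the driver.
    print(hot.count())    # 2
    print(hot.collect())  # [(1510.0, 92.4), (1520.0, 95.0)]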
The data visualization system and web application enable end-users to access system information by connecting to the cloud platform. Using this application, the end-user can visualize state variables and model parameters; consult saved historical information; see events that have been detected, diagnosed and predicted by the platform; and interact with the system. The visualization
module comprises the functional modules for downhole big data visualization that can be accessed from
the homepage. The downhole big data visualization tool was implemented as a web application with web

browser-based GUI. The web application consists of a user interface layer, a web framework, data storage
and visualization. The user interface layer was implemented primarily using HTML5 and JavaScript, which
function in any modern web browser. This makes the downhole big data visualization cross-platform
compatible for users on any device with a web browser. The web framework layer uses a standard web
framework with interchangeable and scalable components as a mediator between the front-end visualization
layer and the underlying web server. Because it is Python based, it can run on multiple server platforms
(Windows or Linux) as well as a variety of web servers.
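As a hedged sketch of this layer (Flask is used here purely as an example of a Python web framework; the route and helper below are hypothetical), a browser-facing endpoint might look like:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def load_latest_trace(well_id: str) -> dict:
        # Placeholder for a query against the cloud data store.
        return {"well_id": well_id,
                "depths": [1500.0, 1510.0],
                "temps": [92.1, 92.4]}

    @app.route("/api/wells/<well_id>/dts/latest")
    def latest_dts(well_id: str):
        """Serve the most recent DTS trace as JSON to the HTML5/JavaScript GUI."""
        return jsonify(load_latest_trace(well_id))

    if __name__ == "__main__":
        app.run()  # runs on Windows or Linux, behind a variety of web servers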

Data Communication and Data Flow


There are two types of data connectivity for real-time downhole big data between the wellsite and the cloud platform: direct or indirect network connection. A direct connection links the wellsite instruments to the cloud through a cellular or satellite wireless modem; an indirect connection sends the data to the customer's corporate data server, which then forwards it to the cloud platform.
The typical two-way flow of data between the asset, customer and the cloud environment is depicted in
Fig. 2. Data generated at the wellsite by downhole or surface sensors, operational activity, or other logging
measurements is transmitted to the cloud via industry standard formats (e.g., ETP, WITSML, PRODML, or
OPC UA). The data pipe from the wellsite to the cloud can be made directly through a satellite or cellular
link, or indirectly through the operator's internal network as shown in the figure. Once the link is established,
data is securely stored at two synchronized 24/7 centers strategically located on different continents. Once
the data is in the cloud, task-based software can be developed through the Application Layer shown in
Fig. 2. Implementation of such applications necessitates functionality be built into the Analytics Library,
as well as the web-based programming to facilitate the user interface. Fig. 2 illustrates real-time dashboards
and reporting, alerting and messaging, as well as raw data downloads as a few examples of possible output
from the application layer that may be sent back to the monitoring center or wellsite.

Figure 2—Data flow between field device, cloud server and customer

Data Analytics
Critical to the value generated by any automated workflow that is enabled by the cloud-based system in
Fig. 2 is the data analytics engine. Within our cloud-based platform this is embodied by the Analytics
Library. The functionality requirements on the engine are quite broad. For example, an alerting application
may require only standard and very efficient signal-processing methods, while a production optimization
application may require fully transient flow solvers, working in conjunction with computationally intensive
optimization algorithms. Because of this, we chose to develop the analytics engine as a standalone software
product written in C++. An immediate advantage of this choice is that the Analytics Library can be
developed and maintained separately from the web-based software – the three main benefits being:
1. Different teams can manage the analytics and web interface – efficient development of application-specific analytics requires oilfield expertise and a technical skill set that is usually different from that required by web and network programming.
2. Test suites used to ensure quality assurance and control can be developed specifically for the Analytics Library, independent of the web-based software platform (though there will always be integration testing within the cloud environment).
3. The Analytics Library can be re-used as a generic computational engine in other desktop applications at the wellsite, where network connectivity is limited.
Due to the nature of the physical measurements made by fiber-optics downhole distributed sensing
systems, a wide range of downhole analytical services can be employed based on physics-based models,
specifically in an inverse manner. As shown in Fig. 3, both downhole and surface measurements such
as temperature, pressure, acoustics, strain, surface composition can be obtained either from downhole
distributed/discrete sensors or surface measurements. However, in most cases, the true value for customers is
embedded within data not directly available from the measurements themselves. Examples include perhaps
an injection or production profile, an acid placement log, a well integrity report, a flow estimation, or a water
and/or gas cut report. Therefore, we must develop a customized physics model based on the data available
and certain valid assumptions to map the direct measurements to the value of interest for customers. This
process is depicted in Fig. 3, and the steps of the inversion workflow are as follows. First, run the fast forward physics model with an initial guess of the values of interest. Next, compare the forward
physics model output with the measurement. If the discrepancy between the forward model output and the
measurement is over a predefined threshold, the forward model is calculated with an adjusted guess of the
values of interest. This iteration continues until the discrepancy between the forward model output and the
measurement is below the pre-defined threshold.
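This loop can be written compactly in Python (a conceptual sketch under the stated workflow: forward_model and the simple residual-based update are placeholders for the physics-specific pieces, which in practice would use a proper solver such as Gauss-Newton or Levenberg-Marquardt):

    import numpy as np

    def invert(measurement, forward_model, initial_guess,
               threshold=1e-3, step=0.1, max_iter=200):
        """Adjust the values of interest until the forward model
        reproduces the measurement to within the threshold."""
        guess = np.asarray(initial_guess, dtype=float)
        for _ in range(max_iter):
            predicted = forward_model(guess)
            residual = measurement - predicted
            if np.linalg.norm(residual) < threshold:
                break  # forward model now matches the measurement
            # Naive fixed-step update toward a smaller residual.
            guess = guess + step * residual
        return guess

    # Toy usage: recover x from a forward model y = 2x, given measured y = 10.
    print(invert(np.array([10.0]), lambda g: 2.0 * g, np.array([0.0])))  # ~5.0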

Figure 3—Cloud-based model-driven analytics plug-in


8 IPTC-19418-MS

Another workflow for the analytics engine involves the combination of physics-based modeling and
machine learning algorithms. This embodies a flexible workflow depending on the problem at hand and the
nature and availability of the measurements. Fig. 4 describes one possible workflow when the response of
the forward physical model is not directly available. This workflow is used when the direct measurements are not what is required by the physics-based model, or in cases where the downhole environment is too
complex to model physically. Machine learning techniques are first used to map the direct measurements
into some form of indirect measurements, which are then fed into the inverse workflow to derive the values
of interest to customers.
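A hedged sketch of the machine learning stage follows (all data here is synthetic and the regressor choice is illustrative; the paper does not prescribe a specific algorithm for this mapping):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))                   # direct measurements (synthetic)
    y_train = X_train @ np.array([0.5, -1.2, 0.3, 2.0])   # indirect target (synthetic)

    # Step 1: learn the mapping from direct measurements to the indirect
    # quantity required by the physics-based model.
    mapper = RandomForestRegressor(n_estimators=100, random_state=0)
    mapper.fit(X_train, y_train)

    # Step 2: apply the mapping to new field data; the result would then be
    # fed into the inversion loop sketched earlier to derive the values of
    # interest to customers.
    indirect = mapper.predict(rng.normal(size=(1, 4)))
    print(indirect)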

Figure 4—Cloud-based hybrid analytics plug-in

Because the analytics library is developed separately, it requires integration into the cloud environment
upon release. This is accomplished using Protocol Buffer Technology developed by Google [10] as shown in
Fig. 5. A protocol buffer is a platform- and language-neutral extensible mechanism for serializing structured data, like XML but smaller, faster, and simpler. It is widely used for communication between services, whether programs talking to each other over a wire or data being put into storage. Input
data needed by the analytics library is accessed from the cloud storage, serialized into a protocol buffer
message, and passed into the library. Inside the library, the protocol buffer message is de-serialized and
the input data are parsed from the message and used in numerical computation. After the calculation is
complete, the results are then serialized and passed via protocol buffer back to the data platform, where the
results are then further processed using web-based programming, such as triggering alerts, and displayed
in the front end or stored in the database.
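Assuming a hypothetical message definition such as the one below (the schema, field names, and generated module name are illustrative, not the platform's actual contract), the round trip described above reduces to a serialize/parse pair:

    // analytics.proto (hypothetical schema)
    syntax = "proto3";
    message DtsTrace {
      string well_id = 1;
      int64 timestamp_ms = 2;
      repeated double depths = 3;
      repeated double temps = 4;
    }

After compiling with protoc --python_out=. analytics.proto, the Python side of the exchange looks like:

    import analytics_pb2  # generated by protoc from the hypothetical schema

    msg = analytics_pb2.DtsTrace(
        well_id="W-001",
        timestamp_ms=1553558400000,
        depths=[1500.0, 1510.0],
        temps=[92.1, 92.4],
    )
    wire = msg.SerializeToString()    # serialize before passing into the library

    parsed = analytics_pb2.DtsTrace()
    parsed.ParseFromString(wire)      # de-serialize inside the analytics library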

Figure 5—Interfacing analytics library and the cloud-based data model using the protocol buffer

Data Visualization
Fig. 6 shows co-visualization in the fiber-optics-based data management web application. Combining fiber-optic distributed temperature sensing (DTS), distributed strain sensing (DSS) and distributed acoustic sensing (DAS) data with other surface and downhole information can provide the insight you need to enhance production and make more informed operational decisions. In this dashboard, the interactive web interface
lets you see temperature, strain, and acoustic readings across time and depth in each asset, together with other
auxiliary information such as well schematic, real-time PI tags, well logs and well trajectory etc., enabling
quick identification of trends, patterns, and anomalies that can be used to diagnose downhole conditions,
enhance production, and improve overall recovery.

Figure 6—Integrated visualization dashboard using downhole DTS/DSS/DAS data

Use Cases
There are many applications of the developed and implemented cloud-based downhole fiber optics data
management and analytics service platform. This section presents results of some of the field case studies.
Gas Lift Valve Performance Status Surveillance in HPHT Wells. One of the first SaaS big data analytics we developed and provided to customers on our cloud-based big data platform is an automatic gas lift system alert based on real-time DTS measurements. For gas-lifted wells, gas is injected into the well annulus,
flowing down to the designated gas injection point and into the tubing where the injected gas lightens the
fluid column, lowering flowing bottomhole pressure and increasing production rate. For asset managers, it
is crucial to know whether the injected gas enters the tubing at the designated lifting point or not, which is
one of the most important surveillance parameters to monitor the effectiveness and efficiency of the gas lift
system. We developed and deployed a real-time analytics service on our cloud-based platform to detect gas
lift mandrel status and annulus brine level based on physical interpretation of the DTS data. It is applied on 10 gas lift wells in the Gulf of Mexico (GOM) to automatically detect gas lift status changes in each well in real time and to alert subscribed customers to changes that exceed pre-set thresholds. Based on the customer's evaluation, roughly 1,000 bopd of "easy gains" were identified across the 10 wells reviewed.
reviewed. Fig. 7 shows the frontend dashboard on the cloud-based platform to display the most recent status
of the gas lift mandrels for one customer well. On the left most is the wellbore schematic diagram. In the
middle of the dashboard shows the waterfall plot of DTS data over a user-configurable period of time. On
10 IPTC-19418-MS

the right side shows a plot of DTS trace picked by user or by default streamed to the data platform most
recently. At the bottom of the dashboard is an axis which is capable to show temporal DTS data at a user
selected depth and/or any PI data such as injection rate. All these plots are properly synchronized either in
depth or time, so that users can zoom into any of them and get updated plots in all the others. Whenever a new DTS trace is uploaded to the data platform, the analytics library is automatically called to determine the status of the gas lift mandrels, and the results are presented to the user using color-coded markers and legend text, as shown in the figure. Users can get the gas lift mandrel status in real time whenever they log in to the frontend dashboard remotely. For users who cannot always watch the dashboard, the analytics library results can be used to configure the alert system on the data platform, so that subscribed users are notified by email or text whenever a status change occurs that may require their attention. Fig. 8
shows an example alert email, which contains detailed information of the status change. Fig. 9 presents the
capability of the cloud-based big data analytics service to help customers easily visualize the historical movement of the brine level in the A annulus during an injection test. The computation of temperature gradients in depth is invoked by the customer from the dashboard and helps the customer identify static or slowly varying downhole thermal features, such as the mudline and wellbore construction features, that are otherwise difficult to see in the raw DTS plot. In the example shown in Fig. 9, the spatial temperature gradient is computed and plotted for a period of DTS data during which an injection test was conducted on the gas lift well. From the spatial temperature gradient plot, it is easy to see that the brine level continuously moves down the A annulus as the injection rate ramps up, stopping when it reaches the next gas lift mandrel.
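The spatial temperature gradient itself is simply a depth-wise derivative of each DTS trace; a minimal numpy sketch (illustrative values, not field data):

    import numpy as np

    def spatial_temperature_gradient(depths, temps):
        """Depth-wise gradient of one DTS trace (degrees per unit depth).
        Slowly varying features such as the mudline, wellbore construction
        features, or a brine level show up as steps in this gradient."""
        return np.gradient(np.asarray(temps), np.asarray(depths))

    depths = np.array([1500.0, 1510.0, 1520.0, 1530.0])
    temps = np.array([92.1, 92.4, 95.0, 95.1])
    print(spatial_temperature_gradient(depths, temps))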

Figure 7—Gas Lift monitoring dashboard

Figure 8—Example gas lift alert message email



Figure 9—Spatial temperature gradient (STG) showing brine level moving in the A annulus during injection test

Automatic Structural Health Monitoring for Early Detection of Reservoir Compaction (Deformation) During Water Injection Operations Using DSS Data. Distributed strain sensing (DSS) technology enables high-resolution (~centimeter) strain measurement along the fiber deployed in the wellbore. It can be used to
monitor strain evolution for wellbores and near-wellbore formations due to reservoir compaction changes
caused by injection and/or production. Within our cloud-based big data platform, DSS measurements in conjunction with surface operational data provide valuable insight into the impact of surface operations on downhole wellbore and formation integrity, and the platform automatically fires alerts if a potential well integrity risk is detected. Fig. 10 highlights the continuous DSS data for an injection well, from which the correlation
of the axial strain evolution of the wellbore and the operational history of the injector and even its nearby
injector can be easily identified.

Figure 10—Axial strain evolution over a 6-month period

DAS FBE Data Management & Analytics for Real-Time Monitoring of Injection and Production Wells.
DAS data management on cloud-based platforms has been challenging because of its size and transmission
rate. Frequency band energy (FBE) has been adopted by our cloud-based big data platform for real-time streaming of DAS measurements because of its greatly reduced size and retained acoustic information.
Effort has been spent expanding the capability of the cloud-based data platform to support the management of FBE data. Fig. 11 demonstrates the current progress of the platform in storing and visualizing FBEs for different frequency bands. Comparing the two plots in Fig. 11 shows that the same downhole event exhibits different features in different frequency bands. Due to the wealth of information contained in DAS FBE data, a wide range of algorithms is being developed for different applications, such as production profiling, injection monitoring, leak detection, frac monitoring, and so on.
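Conceptually, FBE reduces a raw DAS channel to one energy value per frequency band per time window; a simplified Python sketch follows (the band edges match Fig. 11, while the sampling rate and window length are assumptions for illustration):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def fbe(signal, fs, f_lo, f_hi, window):
        """Energy of one DAS channel in the [f_lo, f_hi] Hz band, summed per
        window, collapsing the raw acoustic stream to a compact time series."""
        sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        filtered = sosfilt(sos, signal)
        n = (len(filtered) // window) * window
        return (filtered[:n] ** 2).reshape(-1, window).sum(axis=1)

    fs = 2000                                    # assumed sampling rate, Hz
    x = np.random.default_rng(0).normal(size=fs * 10)   # 10 s of synthetic data
    print(fbe(x, fs, 2.86, 10.11, window=fs))    # one energy value per second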

Figure 11—Left: Energy concentration at 3 locations in the FBE band (2.86–10.11 Hz); Right: Energy concentration at 1 location only in the FBE band (100–500 Hz)

Flow Profiling using DTS and PLT Data. DTS has been widely used for flow profiling. This case study concerns a vertical gas well approximately 2,700 meters deep. The total gas production is about 34.1 MMSCFPD, with production zones ranging from 2,427 m to 2,680 m, a total interval of 253 m. DTS is used to calculate the gas production profile. Fig. 12 shows the DTS temperature, represented by the blue line; the brown line is the geothermal temperature. Due to the Joule-Thomson cooling effect, the DTS measurement is lower than the geothermal temperature most of the time. Thermal analysis is performed, and the pink line shows the best match to the DTS measurements. Corresponding to the pink line, the gas distribution is then plotted. From Fig. 12, the top zone contributes about 80% of the total production, while the middle and bottom zones contribute about 9% and 11%, respectively.

Figure 12—Gas production profiling using DTS data



In another case study, a traditional well performance monitoring scheme (production logging tool) was used to provide production data on the current status of the well system. With the traditional system, however, it was not feasible to analyze the large amount of information needed for continuous, round-the-clock production performance monitoring and for detecting events or abnormal conditions within a particular look-ahead period of time to prevent production loss. The client needed an automated production performance forecasting and production risk management solution that was reliable and scalable and could be deployed easily within the existing IT infrastructure. The newly developed system was used to read data from multiple databases, including reservoir, completion, historical and downhole production data, and to interpret this data to generate information about current and future operational conditions of the monitored wells. Spatial distributions of measured and predicted oil and water flow rates are illustrated in Fig. 13. For most of the zones or layers, the prediction error falls within ±2%; however, some zones or layers show error margins of up to ±5% for the model predictions.

Figure 13—Multi-Zone Flow Estimation & Allocation Analytics

Data-Driven Machine Learning Model for Flow Estimation & Forecasting in Well Management. In this
section we present our data-driven machine learning model to estimate oil and gas flow rates in multiphase
production wells. The computational intelligence, or machine learning techniques used in this study are:
Principal Component Analysis (PCA), Regression methods (Linear Regression and Least Median Squares)
and Hidden Markov Models. PCA is a common machine learning technique for dimensionality reduction.
Given a possibly large dataset or matrix of relatively high dimension, PCA transforms the data into its significant (principal) components, performing dimensionality reduction by discarding components with small eigenvalues. Our intention in using PCA in this work was to see how dimensionality reduction impacts the quality of prediction. In particular, we reduce the given dataset (source input) from its original 11 dimensions to 8 dimensions by calling the PCA method in the machine learning library. The resulting output was later
fed as input to the linear regression (LR) method. To build the hybrid computational intelligence model
for estimating oil and gas flow rates, we used historical production data from a field which consists of
multiple stacked pay zones that are turbiditic in nature. The data points are from downhole gauges in the
six wells. For this study, the input parameters used are flowing bottomhole pressures, flowing bottomhole
temperatures, tubing pressures, tubing temperatures, choke opening position, gas-oil ratio, oil-water ratio
and API gravity, while the output parameters are oil flow rate and gas flow rate. The training, validation and testing sets comprise 60%, 20% and 20% of the total data, respectively. The scatter diagrams (cross plots) in Fig. 14 compare the measured flow rates against the flow rates estimated by the hybrid computational intelligence models. As shown in Fig. 14, a tight cloud of points about the 45-degree line was obtained by the hybrid computational intelligence models. The small error between measured and predicted values suggests that one hybrid computational intelligence model can achieve satisfactory predictive accuracy across multiple wells in a field, rather than requiring a separate model for each individual well. The use of a generalized model for multiple wells reduces computational cost and time compared to developing a single model for every single well. Improved well flow rate estimation could be attained if the PCA parameters were further optimized.
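The PCA-plus-regression portion of this pipeline can be sketched with scikit-learn (synthetic data stands in for the field dataset; the 11-to-8 reduction and 60/20/20 split mirror the description above):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 11))                  # 11 input parameters (synthetic)
    y = X @ rng.normal(size=11) + rng.normal(scale=0.1, size=600)  # flow-rate proxy

    # 60/20/20 split into training, validation and test sets.
    X_train, X_tmp, y_train, y_tmp = train_test_split(
        X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_tmp, y_tmp, test_size=0.5, random_state=0)

    # Reduce 11 dimensions to 8 principal components, then fit a
    # linear regression on the reduced representation.
    pca = PCA(n_components=8).fit(X_train)
    model = LinearRegression().fit(pca.transform(X_train), y_train)
    print("validation R^2:", model.score(pca.transform(X_val), y_val))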

Figure 14—Comparison between Predicted and Measured Oil / Gas Flow Rates

Conclusions
Downhole big data including emerging fiber optic distributed measurement (DTS, DAS, and DSS)
along the wellbore provides a wealth of real-time information for O&G operation and production.
These data provide enormous benefits and enhance efficiency, accuracy, reliability, and performance in O&G
upstream applications. Examples of these applications include production profiling; injection monitoring; well integrity and reservoir compaction monitoring; stimulation and fracturing diagnostics; thermal flood monitoring; and real-time closed-loop production management and control systems. The growth of
such applications has caused an emergence of business workflows that depend on integrating distributed
fiber data with other surface and downhole data to make timely and well-informed decisions. These
processes typically include data aggregation, data transfer, data security & management, data processing &
storage, followed by quantitative analytics of large volumes of data. The decision making based on such a
process has led to the advent of the digital oilfield phenomenon in the petroleum industry.
This paper has provided an overview of digital technology driven by the oil and gas industry's needs to reduce costs and enhance performance in downhole big data management, processing
and analytics. Some of the major challenges and/or obstacles to overcome have been highlighted. In this
paper, we presented the development, implementation and field applications for a new generation of cloud-
based big data infrastructure & architecture for real-time downhole data management and analytics. The

use of web services for interaction between this system and client-side applications was also discussed.
The web service is used by our customers in their third-party software tools, and it was shown that the web services provide an efficient method for storage and retrieval of distributed sensing data and time-series tag data from the database.
Case studies are presented that demonstrate successful field testing to verify the functionalities of the
newly developed system in many O&G upstream applications using real-time and historic downhole big
data. Users can access the data and analytics services anywhere and anytime from the server. Such analysis
of large amounts of sensor data will enable operators to bring substantial improvements to operations, in
particular by making timely and accurate decisions. These initiatives have the potential to provide a new
generation of downhole big data communication, information management and analytics technologies that
can significantly increase profit and reduce costs, thereby strengthening the economic performance and
competitiveness of the petroleum industry.

References
1. Almulla, J.M. 2012. Utilizing Distributed Temperature Sensors in Predicting Flow Rates in Multilateral Wells. Ph.D. Dissertation, Texas A&M University, College Station.
2. Duru, O.O. 2011. Reservoir Analysis and Parameter Estimation Constrained to Temperature, Pressure and Flow Rate Histories. Ph.D. Dissertation, Stanford University, Palo Alto.
3. Williams, T., Lee, E., Chen, J., Wang, X., Lerohl, D., Armstrong, G., Hilts, Y. 2015. Fluid Ingress Location Determination Using Distributed Temperature and Acoustic Sensing. SPE Paper 173446, presented at the SPE Digital Energy Conference & Exhibition, The Woodlands, Texas, USA, 3-5 March.
4. Lumens, P.G.E. 2014. Fiber Optic Sensing for Application in Oil and Gas Wells. Ph.D. Dissertation, Technical University of Eindhoven.
5. Li, Z., Yin, J., Zhu, D., Datta-Gupta, A. 2011. Using Downhole Temperature Measurement to Assist Reservoir Characterization and Optimization. Journal of Petroleum Science and Engineering, Vol. 78, pp. 454–463.
6. Moreno, J.A. 2014. Implementation of the Ensemble Kalman Filter in the Characterization of Hydraulic Fractures in Shale Gas Reservoirs by Integrating Downhole Temperature Sensing Technology. M.S. Thesis, Texas A&M University.
7. Ramos, J.E. 2015. Reservoir Characterization and Conformance Control from Downhole Temperature Measurements. M.S. Thesis, University of Stavanger.
8. Bello, O., Ade-Jacob, S., Yuan, K. 2014. Development of Hybrid Intelligent System for Virtual Flow Metering in Production Wells. SPE Paper 167896, presented at the SPE Intelligent Energy Conference & Exhibition, Utrecht, The Netherlands, 1-3 April.
9. Bello, O., Ji, M., Denney, T., Lazarus, S., Vettical, C. 2016. A Dynamic Data-Driven Inversion Based Method for Multi-Layer Flow and Formation Properties Estimation. SPE Paper 181025, presented at the SPE Intelligent Energy Conference & Exhibition, Aberdeen, Scotland, United Kingdom, 6-8 September.
10. Google Protocol Buffers. https://developers.google.com/protocol-buffers/
