MachineLearningAssistedSeismicInterpretation_UserGuide

The document is a user guide for the Machine Learning for Petrel 2023.3 software platform, focusing on machine learning-assisted seismic interpretation. It covers topics such as fault prediction, extraction processes, and validation techniques, emphasizing the importance of high-quality training labels for effective machine learning outcomes. Additionally, it provides detailed instructions on utilizing various tools and features within the software to enhance seismic data analysis and interpretation.


Petrel

E&P software platform

Machine Learning for Petrel 2023.3


Machine Learning Assisted Seismic
Interpretation
Release N

User Guide
Version 2023.3.1.0
Copyright Notice
Copyright © 2023 SLB. All rights reserved.
This work contains the confidential and proprietary trade secrets of SLB and may not
be copied or stored in an information retrieval system, transferred, used, distributed,
translated or retransmitted in any form or by any means, electronic or mechanical, in
whole or in part, without the express written permission of the copyright owner.

Trademarks & Service Marks


SLB, Schlumberger, the SLB logotype, and other words or symbols used to identify the
products and services described herein are either trademarks, trade names or service
marks of SLB and its licensors, or are the property of their respective owners. These
marks may not be copied, imitated or used, in whole or in part, without the express
prior written permission of SLB. In addition, covers, page headers, custom graphics,
icons, and other design elements may be service marks, trademarks, and/or trade
dress of SLB, and may not be copied, imitated, or used, in whole or in part, without the
express prior written permission of SLB. Other company, product, and service names
are the properties of their respective owners.
ECLIPSE® is a mark of SLB.
An asterisk (*) is used throughout this document to designate other marks of SLB.
Security Notice
The software described herein is configured to operate with at least the
minimum specifications set out by SLB. You are advised that such mi nimum
specifications are merely recommendations and not intended to be limiting to
configurations that may be used to operate the software. Similarly, you are
advised that the software should be operated in a secure environment
whether such software is operated across a network, on a single system
and/or on a plurality of systems. It is up to you to configure and maintain your
networks and/or system(s) in a secure manner. If you have further questions
as to recommended specifications or security, please feel free to contact
your local SLB representative.
Contents

1. ML Fault Prediction
   1.1 Machine-learning-based fault prediction
   1.2 Use the User-trained fault prediction
   1.3 Labeling strategy for user-guided training
   1.4 Validate the fault prediction output

2. Fault Extraction
   2.1 Fault extraction process
   2.2 Perform Fault extraction based on fault prediction result
       Evaluate
       Extraction parameters
       Advanced parameters
       Post-processing
       Fault extraction tools
   2.3 Edit Fault extraction results

3. ML Horizon Prediction
   Machine learning based horizon prediction
   Use Horizon prediction
   Horizon prediction attributes and model stored with seismic horizon
1. ML Fault Prediction

1.1 Machine-learning-based fault prediction


You can use Machine learning (ML) based fault prediction to predict faulted
discontinuities within 3D seismic data volumes.
Predictions are performed by a prediction model based on convolutional
neural networks (CNNs). The model is trained to identify faulted
discontinuities using a series of expertly labeled seismic images with faults
positively identified, also known as training labels. Once trained, the
prediction model can be used to perform predictions elsewhere, at unseen
locations, within a given dataset.
These techniques enable the prediction of faults on a voxel-by-voxel basis,
output as a fault probability attribute cube. This redefines the concept of a
seismic-based fault cube and enables new levels of accuracy and efficiency
that provide a robust basis for fault interpretation, extraction, modeling,
and validation.

Figure 1: Access to the user-trained fault prediction module.

• Training data or labels, used to train the prediction model, are


interpreted by the user.
• You can use a fault prediction cube to analyze the main fault
trends and optimize your training labels.

Figure 2: User-trained fault prediction workflow.

Figure 3: User-trained fault prediction example. Copyright Commonwealth of Australia (Geoscience


Australia).

1.2 Use the User-trained fault prediction


The user-trained prediction model is a supervised model, in which the training is
based on user-specified training labels generated by you for the specific
dataset of interest. The labeling workflow utilizes pre-existing Seismic
interpretation tools available in Petrel to pick the fault labels.
Note: You can find more about the Seismic interpretation tools in the Petrel help:
Geophysics/Seismic interpretation/Fault interpretation.
1 On the Seismic Interpretation tab, in the Assisted
interpretation group, select User-trained fault prediction.
The Assisted seismic interpretation dialog box opens.
2 Insert a seismic cube into the Assisted seismic interpretation dialog
box.
3 Insert a Training label.
Labels are faults that are interpreted on inlines or crosslines and have a
fault interpretation type (you can check it in the Settings of the fault, in the
Info tab). Training labels are used by the neural network to learn from.
4 Optional: Select the Create inline and crossline fault prediction
cubes check box if you want to save intermediate results.
Note: If Create inline and crossline fault prediction cubes is selected,
the prediction creates the following outputs:
• Original inline fault prediction cubes.
• Original crossline fault prediction cubes.
• Final prediction cube.
The fault prediction result is the combination of inline and crossline
prediction cubes. A separate neural network algorithm is used to merge
these cubes. For further analysis, you can have the original inline and
crossline fault prediction cubes loaded in Petrel. These outputs are not
created if the check box is cleared.
5 Select Run to start the training and prediction.

Figure 4: The interface of User-trained fault prediction.

6 When a fault prediction session is running, you can find the task in the
Task Manager and open it to view the updated Message log.
• If you have not changed the seismic storage default directory
from System settings-Seismic settings-Seismic files, the resulting
fault cubes are stored in the Petrel project directory: /project
name.ml/sessions, and are imported into Petrel after the session
finishes successfully.
• If you have changed the seismic storage default directory from
System settings-Seismic settings-Seismic files, the resulting fault
cubes are stored in the assigned directory and are imported into
Petrel after the session finishes successfully.
Note: If Petrel closes unexpectedly, Python may keep running in the
background and write results to the directory.

Figure 5: The Task Manager showing a task running.

1.3 Labeling strategy for user-guided training
Ensure you generate high-quality labels for training, with sufficient detail and
consistency to act as an effective training input. Poor-quality or inaccurate
fault labels can lead to poor performance of the prediction model.
Consider the following when performing a labeling task, compared to
conventional practice in the manual fault interpretation workflow:
• Identify the focus of the interpretation. This might be a particular
depth interval, one or more specific fault blocks, or regions of the
cube that are dominated by a particular structural style.
• Identify which reflection events are strongest and most pertinent
across the area of interest. This is important, because they form
the basis of identifying the fault offset.
• Also consider the coverage of these events, for example, where
the fault indicator lies at depth.
• Use the identified horizon events to identify fault offsets with
confidence.
• Identify indicators of faulting, which might be in the form of offsets
of strong amplitude reflection events, variations in seismic
character from the upthrown to the downthrown side of the fault,
areas of low amplitude, or a reduction in areas of otherwise
competent reflectivity.
• Identify regions where the fault cannot go, such as well-imaged
fault blocks with high amplitude, stable reflectivity that show no
indicators indicative of faulting. Using previously identified fault
indicators, define the trend of the fault.
• Identify fault indicators along the fault trend and the upper and
lower vertical limits of the fault trend.
• Identify picking nodes based on trend geometry changes and
high confidence fault indicators.
• Pick the fault from top to bottom integrating the identified picking
nodes that capture the changes in the fault geometry and pass
through the highest confidence region.

Figure 6: Fault labelling examples and guidance; note the level of accuracy required to capture an
effective training label. Copyright Commonwealth of Australia (Geoscience Australia).

1 Abrupt termination of reflection events.


2 Subtle inflection of horizons through fault zone.
3 Define upper limit of fault.
4 Label faults consistently, especially at depth through lower
signal-to-noise regions.
5 Picking 'nodes' used to identify changes in geometry should be used
as a basis for picking, ensuring the fault passes through the highest
confidence areas.
Even when the conventions of fault interpretation are followed, effective
labeling requires a different mindset from that of conventional fault
interpretation. Before you run a user-trained prediction, consider the following
points when checking any labels:
• Consider the distribution of labeled inlines and crosslines. In
general, select the lines that show the faults present, but also
sample the variability in data and structural imaging quality to
provide training data representative of the entire dataset. Selecting
several lines in close proximity, with little variability, is not an
effective training input. You can pick labels on inlines and
crosslines. Currently, random lines are not supported.
• Aim to label approximately 0.25% to 2% of input seismic data,
depending on the cube size.
• Select sections that are fully labeled in terms of the faults present
across the section. Missed faults can confuse the learning
process, because conflicting information is passed through the
training labels.

• Be consistent with the labels provided. If you are interested in only
identifying large scale faults, then do not label the smaller scale or
polygonal faults.
• Avoid ambiguous labels. If you are in doubt about the presence of
a particular fault, leave it out, but be consistent in choosing which
faults to label. You can always add additional training labels in
subsequent predictions to improve the result.
Consider cropping volumes of data to the region of interest. However, seismic
data must meet the minimum requirement of at least 266x266 samples for any
labeled intersection to run ML.
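The sizing guidance above (0.25% to 2% of the data labeled, and at least 266x266 samples per labeled section) can be sketched as a small pre-flight check. This is an illustrative Python sketch only; the function name and inputs are assumptions, not part of Petrel, and the coverage here is approximated as a ratio of labeled lines.

```python
# Sketch: sanity-check a labeling plan against the guidance above.
# Illustrative only; "coverage as a ratio of labeled lines" is an
# assumption, and only the 0.25%-2% range and 266x266 minimum come
# from the guide.

MIN_SAMPLES = 266  # minimum section size (samples) for ML to run

def check_labeling_plan(n_labeled_lines, n_total_lines, section_shape):
    """Return a list of warnings for a planned set of training labels."""
    warnings = []
    coverage = n_labeled_lines / n_total_lines
    if not 0.0025 <= coverage <= 0.02:
        warnings.append(
            f"coverage {coverage:.2%} outside the suggested 0.25%-2% range")
    rows, cols = section_shape
    if rows < MIN_SAMPLES or cols < MIN_SAMPLES:
        warnings.append(
            f"section {rows}x{cols} below the {MIN_SAMPLES}x{MIN_SAMPLES} minimum")
    return warnings
```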

1.4 Validate the fault prediction output


You can use the Flip/Roll Mixer tool to validate the output fault prediction
cube. It enables you to compare and mix multiple input datasets within a single
mixer cube. It also enables smooth adjustment of opacity and transparency
between different datasets by using Blend using color table opacity and Blend
weight at the same time.
1 Open the Settings for the fault prediction cube result and set up the
opacity in the Opacity tab to keep only the faults highlighted.
2 On the Seismic Interpretation tab, in the Attributes group,
select Mixer, then select the Flip/Roll Mixer. This creates a mixer
object in the Input pane.
3 In the Background box, insert a seismic cube which has been
used as an input for ML fault prediction from the Input pane.
4 In the Foreground box, insert a fault prediction cube, the result of
the ML fault prediction from the Input pane.
5 In the Mixer dialog box, select Blend using color table opacity.

Figure 7: Mixer Flip/Roll dialog box.

6 Display the seismic section of the created Mixer cube.


7 Press M to move through the seismic to analyze prediction results.
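Conceptually, the mixer blend resembles standard alpha compositing, with the fault probability acting as a per-voxel opacity over the background seismic. The following Python sketch is an analogy only, not Petrel's actual implementation, and the function and parameter names are assumptions:

```python
import numpy as np

# Analogy for "Blend using color table opacity": the fault probability
# (values in [0, 1]) acts as a per-voxel alpha over the background
# seismic. Illustrative only, not Petrel code.
def blend(background, fault_probability, blend_weight=1.0):
    """Composite a fault cube over a seismic cube, alpha-style."""
    alpha = np.clip(fault_probability * blend_weight, 0.0, 1.0)
    return alpha * fault_probability + (1.0 - alpha) * background
```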

2. Fault Extraction
2.1 Fault extraction process
The Machine learning assisted seismic interpretation workflow uses a point
cloud approach, in which ML-based fault prediction cubes undergo a 3D
geometrical analysis that enables the extraction of faults as segmented single
objects that preserve the input resolution and fault plane geometry.
Run the Fault extraction process, which is available on the Seismic
Interpretation tab, in the Assisted interpretation group, with a fault prediction
cube as input to extract faults. When fault point sets are extracted, you can
display them all at once or start looking into specific fault point sets one by one.
You can use the Fault point set editing Tool Palette to merge, split, show or
hide faults.
To get the result faster, the input fault point sets can be subsampled in the Fault
point sets subsampling process, which is available on the Seismic
Interpretation tab, in the Assisted interpretation group, under Fault extraction
tools.
Edited faults point sets are used as input for the Fault Framework process.

Figure 8: Edited faults point sets are used as input for the Fault Framework process.

Note: The Fault extraction license must be selected before Petrel is opened.
Selecting the license afterward, through File/License module, does not
activate the feature.

2.2 Perform Fault extraction based on fault prediction result


You can use the user-trained fault prediction output as input to the Fault
extraction process, or any fault cube where low values describe the
background and high values represent faults. The algorithm and the default
parameters are optimized for the fault prediction output of machine learning.

Figure 9: Fault extraction dialog box.

1 On the Seismic Interpretation tab, in the Assisted


interpretation group, select Fault extraction.
2 In the Fault cube box, in the Fault extraction dialog box, insert a
fault prediction cube.
3 If a Petrel project contains an active seismic cube, the Fault
extraction dialog box opens with this active seismic cube selected
as the input.
4 Select Extract to extract fault point sets.
You can also analyze an input fault prediction cube together with planarity and
azimuth attributes, change the extraction parameters, and run the extraction.
1 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction.
2 In the Fault cube box, on the Fault extraction dialog box, insert a
fault prediction cube.
3 If a Petrel project contains an active seismic cube, the dialog box
opens with this active seismic cube selected as the input.
4 Select Evaluate to create Planarity and Azimuth cubes which you
can analyze before performing the fault extraction.
5 Display the fault prediction cube together with planarity and azimuth
attributes to analyze it.
6 Re-open the Fault extraction dialog box and make sure that the
Planarity and Azimuth cubes are selected to extract fault point sets.
7 Under Extraction parameters, enter the extraction parameters
based on the analysis.
8 Select Extract to extract fault point sets.
By default, extracted faults are listed in the folder by size, from largest to
smallest.
Note: The Fault extraction process is asynchronous, and Petrel is only locked
while newly created fault point sets are added into the project.

Evaluate
Under Evaluate, you can create planarity and azimuth cubes from a fault
prediction cube and analyze them.

Fault cube box


A fault cube is the fault prediction cube output from machine learning.
The minimum threshold parameter defines a fault value. Every amplitude value
above the specified value is considered a fault, and everything below is
considered background.
Radius defines a search area for the geometric fault analysis. The specified
value must be higher than the fault width (number of voxels) in a fault prediction
cube.
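The minimum threshold behaves as a simple cut-off that separates fault voxels from background. As an illustrative Python sketch (not Petrel code; the function name is an assumption):

```python
import numpy as np

# Illustrative sketch: the minimum threshold splits a fault prediction
# cube into fault voxels (above the threshold) and background (below).
def fault_mask(prediction_cube, minimum_threshold):
    """Boolean mask: True where a voxel value is above the threshold."""
    return np.asarray(prediction_cube) > minimum_threshold
```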

Planarity cube box
A planarity cube is used to extract and split faults at the intersections. It
highlights how planar a fault region is with a value range from 0 (no flatness
found in the search radius) to 1 (a completely flat plane in the search radius). To
have a better understanding of the values, under Interpolation method,
select None in the Settings dialog box for planarity and azimuth cubes, in
the Style tab.

Figure 10: Planarity cube: Example of fault point sets split at the intersection between yellow, green
and light blue patch with the Planarity value set to 0.55, while the light blue and violet patches are split
because of fast azimuth changes (compare next Azimuth figure).

Azimuth cube box


The azimuth cube is used to guide the extraction of faults. In conjunction with
the azimuth sectoring, it facilitates the extraction of faults with a consistent
azimuth. This means that the azimuth of an extracted fault only changes slowly
and, therefore, provides for a geologically consistent fault.
The azimuth is calculated according to the geologic definition of the strike and
defined clockwise from North in 360 degrees. The algorithm handles left- and
right-handed coordinate systems.
It follows the convention of the 'right hand rule' in geology. Therefore, the
azimuth is always defined with the dipping of the plane to the right when looking
in the strike direction. For example, a fault with a strike/azimuth towards north
(0°) will have a dip direction towards east (90°), and a fault with a strike south
(180°) will have a dip direction towards west (270°). For a visual explanation,
see the figure below.
To visualize the values of the azimuth cube, open the settings of the azimuth
cube and select the Interpolation method None inside the Style tab.
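The right-hand-rule convention above ties the dip direction to the strike azimuth by a fixed 90° clockwise rotation, which can be expressed in one line (illustrative Python, not Petrel code):

```python
# The 'right hand rule' convention: the plane dips to the right when
# looking in the strike direction, so the dip direction is the strike
# azimuth rotated 90 degrees clockwise.
def dip_direction(strike_azimuth_deg):
    """Dip direction (degrees clockwise from North) for a given strike."""
    return (strike_azimuth_deg + 90.0) % 360.0
```

This reproduces the examples in the text: a strike of 0° (north) gives a dip direction of 90° (east), and a strike of 180° (south) gives 270° (west).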

Note: If faults are very steep, with close to 90° dip, the azimuth can sometimes
flip by -180 or +180 degrees in some areas because of the local dip
(calculated within a very small radius). This can cause tiny holes in the
extracted fault point sets.

Figure 11: Angle definition.

Figure 12: Azimuth cube: Example of fault point sets split between light blue and violet patch
because of fast changes in azimuth value in alignment with the sectoring and merging optimization,
while the three other patches are split by planarity (compare with the previous Planarity figure).

Extraction parameters
You can use Azimuth and Fault parameters to have an impact on the output of
the Fault extraction process.

Azimuth range
Azimuth parameters are used to extract faults within the specified azimuth
sector range. You can define the parameter to extract all faults or only the faults
within a specific azimuth range by editing the sector start and sector end
values. Sector angles are measured from true geographic north, clockwise.
Select the Symmetrical check box to enable the additional extraction of faults
with opposing azimuth (that is, the opposing dip direction) compared to the
azimuth range specified by setting the sector start and end fields. When the
specified sector range is more than 180 degrees and the Symmetrical check
box is selected, all the faults are extracted.
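The sector test, including the Symmetrical option, can be sketched as follows. The parameter names are assumptions for illustration; only the clockwise-from-north convention and the opposing-azimuth behavior come from the text above:

```python
# Sketch of the azimuth sector test. With Symmetrical enabled, faults
# with the opposing dip direction (azimuth shifted by 180 degrees) are
# also accepted. Illustrative only, not Petrel code.
def in_sector(azimuth, sector_start, sector_end, symmetrical=False):
    """True if an azimuth (degrees from North, clockwise) is in the sector."""
    def inside(a):
        a = a % 360.0
        if sector_start <= sector_end:
            return sector_start <= a <= sector_end
        # Sector wraps through North (e.g. 340 deg to 20 deg).
        return a >= sector_start or a <= sector_end
    return inside(azimuth) or (symmetrical and inside(azimuth + 180.0))
```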
Note: If faults are very steep, with close to 90° dip, the azimuth can sometimes
flip by -180 or +180 degrees in some areas because of the local dip
(calculated within a very small radius). This can cause tiny holes in the
extracted fault point sets.
Fault definition
Fault parameters define the final faults output.
The Planarity threshold is used to extract faults and split them at intersections.
The value range is [0; 1]:
• 0 means there is no planarity within the radius specified
under Evaluate.
• 1 means that fault regions are flat. Values above the specified
planarity threshold are used to extract and split faults at
intersections. Therefore, if the planarity threshold value is too low,
faults might not be correctly separated at intersections.
Other criteria for merging are based on automatic analysis of whether a merge
might introduce branching effects in 3D, how rapidly the azimuth changes, and
how well the patches fit into the fault. In addition, when merging, Fault
extraction performs a global optimization that aims to create faults that are
consistent and well-integrated to improve the result. Because of this, in most
cases, fault extraction subdivides a fault into geologically consistent patches
that stop either at intersections or where the azimuth changes too quickly. The
main goal of the algorithm is to produce faults that are geologically consistent,
not necessarily huge faults that geologically might not belong together.
If the output of the extraction does not look geologically correct, you can then
merge or split manually. The Fault min. size removes extracted faults which
have fewer points than the specified value.
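The Fault min. size parameter amounts to discarding point sets below a size limit. A minimal Python sketch, with illustrative data structures (each point set modeled as a list of points):

```python
# Sketch of the "Fault min. size" filter: drop extracted fault point
# sets with fewer points than the specified value. Illustrative only.
def filter_by_min_size(fault_point_sets, fault_min_size):
    """Keep only fault point sets with at least fault_min_size points."""
    return [ps for ps in fault_point_sets if len(ps) >= fault_min_size]
```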

Advanced parameters
The provided extraction and advanced parameters are optimized. However,
you can use advanced parameters to address some special cases and
successfully extract complicated faults.
You can use the Sector size box to subdivide the defined sector range into
azimuth sectors. Each of these subdivided sectors can have a sector overlap.
The Sector overlap must be less than half of the Sector size. Fault patches are
first extracted inside these subsectors. If a sector overlap is specified,
patches between adjacent sectors are then automatically merged if, among
other conditions, the overlap in space exceeds the value of the Min. overlap
parameter. In general, keep the default 10° sector size for best quality. A
bigger sector size improves performance but can degrade the output,
producing faults with a rapidly changing azimuth, while a smaller sector size
can split faults into small patches because of the resolution limit of the
azimuth cube. The Patch min.
size parameter removes the extracted faults that are smaller than the specified
value within each defined sector.
The Min. overlap parameter defines the minimum overlap between two faults,
which are from neighboring sectors and represent the same fault. Only faults
that have a bigger overlap (in %) than the defined parameter are considered
mergeable by the algorithm.
Note: Not all the patches that fit the overlap criteria are merged. However, no
patch is merged that overlaps less than the specified minimum overlap.
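The Min. overlap criterion can be sketched as follows, modeling patches as sets of voxel indices and measuring the overlap relative to the smaller patch. Both of these modeling choices are assumptions for illustration; the guide only states that the overlap percentage must exceed the parameter for a merge to be considered:

```python
# Sketch of the "Min. overlap" criterion between patches from adjacent
# azimuth sectors. Patches are modeled as sets of voxel indices, and
# overlap is taken relative to the smaller patch -- both assumptions.
def overlap_percent(patch_a, patch_b):
    """Overlap of two patches as a percentage of the smaller patch."""
    shared = len(patch_a & patch_b)
    return 100.0 * shared / min(len(patch_a), len(patch_b))

def mergeable(patch_a, patch_b, min_overlap_percent):
    """Necessary (not sufficient) merge condition, per the note above."""
    return overlap_percent(patch_a, patch_b) > min_overlap_percent
```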

Post-processing
During the Fault extraction process, you can optionally choose to subsample
fault point sets, create dip and azimuth attributes, or do both.
Subsample fault point sets
By default, this option is not selected. Select the check box and change the
parameters if required:
• Azimuthal sampling defines the sampling bin size in the strike
direction.
• Vertical sampling defines the sampling bin size in the depth
direction.
In the perpendicular direction, the fault is reduced to a width of one point.
Note: The default vertical sampling parameter is always defined in depth units.
The specified average velocity is used to convert these units into time to
subsample the fault point sets extracted in the time domain. If the specified
sampling values for azimuthal and vertical sampling are smaller than the original
sampling values of the input, then the output remains almost identical.
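The depth-to-time conversion implied by the note can be sketched in one line, assuming the standard two-way-time relation (velocity = 2 × depth / TWT); that relation is a common geophysical convention, not something the guide states explicitly:

```python
# Sketch: convert a vertical sampling bin given in depth units into
# two-way time using the specified average velocity. Assumes the
# standard relation v = 2 * depth / TWT.
def vertical_sampling_in_twt(vertical_sampling_m, average_velocity_m_per_s):
    """Vertical bin size in seconds of two-way time."""
    return 2.0 * vertical_sampling_m / average_velocity_m_per_s
```

For example, a 10 m vertical bin at an average velocity of 2000 m/s corresponds to a 10 ms two-way-time bin.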
Subsampling is not a simple decimation. It preserves the main features of the
fault, but in a lower resolution.
One new point is generated per sampling interval. This newly generated point is
freely and optimally placed inside the sampling interval to preserve the main
features of the fault. This is done with a weighted Gaussian fitting that takes
into account not only the points inside the current sampling interval but also
those inside the adjacent intervals. The original points closer to the center of
the current sampling interval have a higher influence on the placement of the
newly generated point than points outside of the interval. The broad influence
area ensures that the overall fault features are preserved, even in areas with
few points. Edges of the fault are processed separately and preserved by
focusing the Gaussian influence on the edge of the original fault. Therefore, the
shape and the limits of the fault are preserved as well.
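A toy one-dimensional sketch of this Gaussian-weighted placement follows. The kernel width (half the interval width) is an illustrative assumption; the guide does not specify it, nor the exact weighting used:

```python
import numpy as np

# Toy 1D sketch of the subsampling idea: one new point per sampling
# interval, placed at a Gaussian-weighted average of the original
# points, so points near the interval center dominate while neighboring
# intervals still contribute. Kernel width is an assumption.
def subsample_interval(points, center, interval_width):
    """Place one representative point for the interval around `center`."""
    points = np.asarray(points, dtype=float)
    sigma = interval_width / 2.0
    weights = np.exp(-0.5 * ((points - center) / sigma) ** 2)
    return float(np.sum(weights * points) / np.sum(weights))
```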
Note: Subsampling is mainly an asynchronous process and locks Petrel only
when it adds subsampled faults into the Input pane.
Create dip and azimuth attributes for fault point sets
By default, the option is selected. You can change the parameters if required:
• Horizontal radius defines the horizontal radius used to calculate
dip and azimuth.
• Vertical radius defines the vertical radius used to calculate the
dip.
• Average velocity is used only for time domain objects. To
calculate dip for extracted fault point sets in time domain, the TWT
values are first internally converted to depth using the specified
average velocity.
Note: To get reliable dip and azimuth attributes, the horizontal and vertical
radius values must be at least twice the size of the sampling interval.
Azimuth and dip are derived from the normal vector that is locally calculated in
the neighborhood defined by the radius parameters. In that area, a plane is fit to
the fault points using a Principal Component Analysis. The normal vector of that
plane describes the dip and azimuth.
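A minimal sketch of this PCA-based calculation follows, with assumed axis conventions (x = East, y = North, z positive up) and using the right-hand-rule strike convention described earlier. This is an illustration of the principle, not Petrel's implementation:

```python
import numpy as np

# Sketch: fit a plane to fault points in a local neighborhood via PCA,
# take the normal (direction of least variance), and convert it to dip
# and strike azimuth. Axis conventions (x=East, y=North, z up) and the
# use of SVD for the PCA are illustrative assumptions.
def dip_and_azimuth(points):
    """Dip (degrees from horizontal) and strike azimuth (degrees from N)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:  # make the normal point upward
        normal = -normal
    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    dip_azimuth = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    # Right-hand rule: strike is 90 degrees counterclockwise of dip dir.
    strike_azimuth = (dip_azimuth - 90.0) % 360.0
    return dip, strike_azimuth
```

For a plane dipping 45° toward the east, this returns a dip of 45° and a strike azimuth of 0° (north), consistent with the convention above.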
Compared to the azimuth calculation in the evaluation step, the
post-processing step gives a more precise estimation. Each fault in the
post-processing step is processed separately and, depending on the extent of
the radius, a geologically more meaningful dip and azimuth can be derived.
The azimuth is calculated according to the geologic definition of the strike and
defined clockwise from North in 360 degrees. The algorithm handles left- and
right-handed coordinate systems.
It follows the convention of the 'right hand rule' in geology. Therefore, the
azimuth is always defined with the dipping of the plane to the right when looking
in the strike direction. For example, a fault with a strike/azimuth towards north
(0°) will have a dip direction towards east (90°), and a fault with a strike south
(180°) will have a dip direction towards west (270°).
Note: Dip and azimuth calculation is mainly an asynchronous process and
locks Petrel only when it adds calculated attributes into the Input pane.

Fault extraction tools


There are two fault extraction tools. The Fault point sets subsampling process
subsamples extracted fault point sets, and the Fault point sets dip and azimuth
calculation process creates dip and azimuth attributes after the fault point sets
have been extracted and, for example, subsampled.
Fault point sets subsampling
You can subsample fault point sets after the Fault extraction process has
finished and fault point sets have been extracted.
Subsampling is not a simple decimation. It preserves the main features of the
fault, but in a lower resolution.
One new point is generated per sampling interval. This newly generated point is
freely and optimally placed inside the sampling interval to preserve the main
features of the fault. This is done with a weighted Gaussian fitting that takes
into account not only the points inside the current sampling interval but also
those inside the adjacent intervals. The original points closer to the center of
the current sampling interval have a higher influence on the placement of the
newly generated point than points outside of the interval. The broad influence
area ensures that the overall fault features are preserved, even in areas with
few points. The edges of the fault are processed separately and preserved by
focusing the Gaussian influence on the edge of the original fault. Therefore, the
shape and the limits of the fault are preserved as well.
1 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction tools and from the
list select Fault point sets post processing.
2 Insert a folder with extracted fault point sets.
3 Select the Create subsampled fault point sets check box.
4 Set the parameters.
• Azimuthal sampling defines the sampling bin size in the strike
direction.
• Vertical sampling defines the sampling bin size in the depth
direction.
Note: The default vertical sampling parameter is always defined in
depth units. The specified average velocity is used to convert these
units into time to subsample the fault point sets extracted in the time
domain. In the perpendicular direction, the fault is reduced to a width
of one point. If the specified sampling values for azimuthal and
vertical sampling are smaller than the original sampling values of the
input, then the output remains almost identical.
5 Select Run.
A new Subsampled Fault extraction folder is created in the Input pane.
Note: If the original fault point sets have dip and azimuth attributes, the
subsampled output does not inherit them. It is recommended to rerun the
Fault point sets dip and azimuth calculation process.
Note: Subsampling is mainly an asynchronous process and locks Petrel
only when it adds subsampled faults into the Input pane.
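The depth-to-time conversion described in the vertical-sampling note can be sketched as follows (a hypothetical helper using t = 2z/v, not part of the Petrel API):

```python
def vertical_sampling_in_twt(dz: float, avg_velocity: float) -> float:
    """Convert a vertical sampling interval given in depth units into a
    two-way-time interval, using the specified average velocity (t = 2*z/v).
    Units must be consistent, e.g. meters and meters per second give seconds."""
    return 2.0 * dz / avg_velocity
```

For example, a 50 m sampling interval at an average velocity of 2000 m/s corresponds to a 0.05 s (50 ms) two-way-time interval.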
Fault point sets dip and azimuth calculation
You can calculate dip and azimuth attributes for fault point sets after the Fault
extraction process has finished and fault point sets have been extracted.
Azimuth and dip are derived from the normal vector that is calculated locally
in the neighborhood defined by the radius parameters. In that area, a plane is
fitted to the fault points using Principal Component Analysis (PCA). The
normal vector of that plane defines the dip and azimuth.
Compared to the azimuth calculation in the evaluation step, the post-
processing step gives a more precise estimate. Each fault in the post-
processing step is processed separately and, depending on the extent of the
radius, a geologically more meaningful dip and azimuth can be derived.
The azimuth is calculated according to the geologic definition of the strike and
defined clockwise from North in 360 degrees. The algorithm handles left- and
right-handed coordinate systems.
It follows the convention of the 'right hand rule' in geology. Therefore, the
azimuth is always defined with the dipping of the plane to the right when looking
in the strike direction. For example, a fault with a strike/azimuth towards north
(0°) will have a dip direction towards east (90°), and a fault with a strike south
(180°) will have a dip direction towards west (270°).
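The PCA-based derivation can be illustrated with a short sketch. This is not the product's exact algorithm; it assumes x = easting, y = northing, z = elevation (positive up), and the function name is hypothetical:

```python
import math

import numpy as np

def dip_and_strike(points_xyz):
    """Fit a plane to fault points by PCA and derive dip and strike azimuth
    from the plane normal, following the right-hand rule convention."""
    pts = np.asarray(points_xyz, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:
        n = -n  # orient the normal upward
    dip = math.degrees(math.acos(max(-1.0, min(1.0, n[2]))))
    dip_dir = math.degrees(math.atan2(n[0], n[1])) % 360.0  # clockwise from north
    strike = (dip_dir - 90.0) % 360.0                       # right-hand rule
    return dip, strike
```

A plane dipping 30 degrees toward east then yields dip 30 and strike 0 (north), matching the convention described above.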
1 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction tools and from the
list select Fault point sets post processing.
2 Insert a folder with extracted fault point sets.
3 Select the Create dip and azimuth attributes for fault point
sets check box.
4 Set the parameters.
• Horizontal radius defines the horizontal radius used to calculate
the dip and azimuth.
• Vertical radius defines the vertical radius used to calculate the
dip.
• Average velocity is used only for time domain objects. To
calculate dip for extracted fault point sets in time domain, the TWT
values are first internally converted to depth using the specified
average velocity.
5 Select Run.
New attributes are created and added to the point sets.
Note: To get reliable dip and azimuth attributes, the horizontal and vertical
radius values must be at least twice the size of the sampling interval.
Note: Dip and azimuth calculation is mainly an asynchronous process and
locks Petrel only when it adds calculated attributes into the Input pane.
Fault interpretation
Create fault interpretation converts fault point sets to fault interpretations. The
fault sticks of each fault are located on a random line perpendicular to the
average fault plane. The fault stick distance and node distance are controlled
by the parameters.
Each fault is analyzed using a coordinate PCA to get the dip and azimuth of the
mean fault plane. The fault is then rotated around the resulting azimuth and dip.
This makes it possible to sample the point set perpendicular to the plane and
thus create a fault interpretation that is consistently on the same random plane
for each fault.
The rotated fault is analyzed in a local neighborhood defined by the fault stick
distance. The points in this neighborhood are locally fitted with a thin-plate
spline. If the spline fit is good enough, the spline is sampled according to the
position of the fault stick and the node distance, and the sampled points are
rotated back to form the fault stick. If the spline fit is not good enough, the
neighborhood is iteratively expanded until a well-defined spline can be fitted
and then sampled as described previously.
This process is done for each fault stick that needs to be fitted inside the fault.
1 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction tools and from the
list select Fault point sets post processing.
2 Insert a folder with extracted fault point sets.
3 Select the Create fault interpretation check box.
4 Enter the parameters.
• Fault stick distance: The fault stick distance parameter defines
the horizontal distance between each stick.
• Node distance: The node distance parameter defines the
approximate vertical distance between each node in a stick.
• Average Velocity: The default vertical sampling parameter is
always defined in depth units. The specified average velocity is
used to convert these units into time to create fault interpretations
from fault point sets extracted in time domain.
5 Select Run.
You can find a new interpretation folder with created fault interpretation objects
in the Input pane.
Apply color as an attribute for all point sets
After the dip and azimuth are calculated, you can apply color as an attribute for
all the point sets in a folder at the same time.
Then, the color table specified for this attribute is applied. The names of the
attributes under each point set must be unique.
Note: The Apply color as attribute for all option is only available for point
sets located in a folder. If the extracted fault point sets with attributes are
located in the Input pane, this option is not available.
1 In a 3D window, display the fault point sets that you want to use
to display an attribute.
2 Check that the displayed point sets have dip and/or azimuth
attributes.
If not, use the Fault point sets dip and azimuth calculation process to
create the attributes.
3 In the Fault extraction folder, expand any point set.
4 Right-click an attribute and, in the context menu, select Apply
color as attribute for all.
Note: To reset the color option, open the Settings dialog box for any
point set in the folder and, on the Style tab, change
the Color to Specified.

2.3 Edit Fault extraction results


When fault point sets are extracted, you can display all of them at the same time
or start looking into specific fault point sets. You can use the Fault point set
editing Tool Palette to merge, split, show, or hide faults.

Select and merge fault point sets


You can use the Fault point set editing Tool Palette to merge faults.

Figure 13: Fault point set editing Tool Palette.

1 Open a 3D window.
2 Display extracted fault point sets.
3 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault point set editing. Alternatively,
right-click the fault point set in the 3D window and
select Fault point set editing from the Point set Mini toolbar.
Merge fault point sets is active by default.
The Fault point set editing Tool Palette opens.
4 In a 3D window, select several fault point sets to merge.
The selected point sets are highlighted.
Note: You can press Ctrl + Z to undo the selection. When you undo, this is
shared between the Merge and Split options and reverts changes in the
order the steps were applied. To apply the undo, make sure that either
the Merge option or Split option is activated on the Fault point sets
editing Tool Palette.

Figure 14: Example of selected fault point sets to be merged.

5 Press Ctrl and select a selected point set one more time to clear
it.
6 When you have selected all the required fault point sets,
double-click to merge them.
The fault point sets targeted for merging are hidden from the 3D
window and are unchanged. You can find them in the original folder in
the Input pane, in the sub-folder Processed faults. The newly created
fault point sets are shown in the 3D window and located at the top of the
same Fault extraction folder.
Note: You can press Ctrl + Z to undo the merge operation. When you
undo, this is shared between the Merge and Split options and reverts the
changes in the order the steps were applied. To apply the undo, make
sure that either the Merge option or Split option is activated on the Fault
point sets editing Tool Palette.

Figure 15: Merged faults.

Split fault point sets into two separate faults


You can use the Fault point set editing Tool Palette to split faults.
1 In the Fault point set editing Tool Palette, select Split fault
point sets.
2 Select Tool settings to check that the point set you need is active.
You can use Select in the Window toolbar to select the required
fault point set in the 3D window to activate it.

Figure 16: Fault point set editing Tool Palette with the active fault point set.

3 To define the split area, click around a point set to draw the polygon.
Note: You can press Ctrl + Z to reset the polygon, but the Split fault point
sets tool stays active. When you undo, this is shared between
the Merge and Split options and reverts the changes in the order the steps
were applied. To apply the undo, make sure that either the Merge option
or Split option is activated on the Fault point sets editing Tool Palette.

Figure 17: Example of splitting the selected fault.

4 To end the polygon, double-click to split a fault point set.


The fault point set targeted for splitting is hidden from the 3D window and
unchanged. You can find it in the original folder in the Input pane, in the sub-
folder Processed faults. The newly created fault point sets are shown in the 3D
window and located at the top of the same Fault extraction folder.
Note: You can press Ctrl + Z to undo the split operation. When you undo, this is
shared between the Merge and Split options and reverts the changes in the
order the steps were applied. To apply the undo, make sure that either
the Merge option or Split option is activated on the Fault point sets editing
Tool Palette.

Figure 18: A split fault.

Show or hide a fault point set


You can use the Fault point set editing Tool Palette to show or hide a fault
point set closest to a pick.
1 Display a fault prediction cube seismic section in a 3D window.
2 On the Fault point set editing Tool Palette, select Show/hide
fault closest to the pick.
3 Select Tool settings to check that the extraction folder you need is
active.
If needed, select another folder by typing a folder name or insert a folder
into the Collection box.

Figure 19: Fault point set editing Tool Palette with an active fault folder.

4 Select a highlighted fault on the time slice.


The fault point set closest to the pick is shown in the 3D window. If the
selected folder has original faults and in addition a Processed
faults sub-folder, it displays the first fault point set found in the order
specified: first it looks in the original folder, and then in the Processed
faults sub-folder.
5 Select the shown fault again.
The fault is hidden.
Note: You can hide all the faults at the same time when Show/hide faults
closest to the pick is active by double-clicking one of the displayed
faults.
Fault dip/azimuth filter
You can use the Fault dip/azimuth filter section to display the data by the dip
and azimuth attributes. Each fault is displayed in the polar coordinate system as
a pole point. The angle of the pole represents the azimuth of the fault (from 0 to
360 degrees), while the distance from the origin represents the dip from 0 to 90
degrees. The pole coordinate of each fault is retrieved by calculating the
angular mean of the selected dip and the selected azimuth attribute values.
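The angular mean used to place each pole can be sketched as follows (an illustrative helper, not the product API; a plain arithmetic mean would fail for azimuths that wrap around 0/360 degrees):

```python
import math

def angular_mean(degrees):
    """Circular mean of azimuth values in degrees: average the unit vectors,
    then take the angle of the resulting vector, mapped back to [0, 360)."""
    s = sum(math.sin(math.radians(d)) for d in degrees)
    c = sum(math.cos(math.radians(d)) for d in degrees)
    return math.degrees(math.atan2(s, c)) % 360.0
```

For example, azimuths 350 and 10 degrees average to 0 (north), whereas a naive arithmetic mean would give the misleading value 180.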
You can select points within the defined dip and azimuth ranges to filter the
data.
1 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction tools and, in the
list, select Fault extraction filters.
2 In the Fault point sets box, insert a folder that has extracted
fault point sets.
If a Petrel project contains an active folder with point sets, the Fault point
sets extraction filters dialog box opens with the active folder selected as
the input.
Note: To make the attributes available in the dip and azimuth lists, dip
attributes must have the dip angle template and azimuth attributes must
have the dip azimuth template. Also, all the point sets in the provided
folder must have dip and azimuth attributes, and these attributes must
have unique names.
3 Expand the Fault dip/azimuth filter section.
You can use the options in the Fault dip/azimuth filter section to select
and clear points in the filter.
• Select Select all to select all the points in the filter.
• Select Clear all to clear all the selected points in the filter.
• Select Invert selection to invert the current selection to the
opposite state.
• Select Undo to revert to the previous selection state.
• Select Redo to revert to the selection state before you
selected Undo.
4 When the points are displayed in the filter, use the mouse to
select specific points.
Note: You can use shortcuts to select points.
• Use Shift and click to add additional selection in the filter and
display additional fault point sets in the active window.

• Use Ctrl and click to clear points in the filter and hide some of the
fault point sets displayed in the active window.
• Use Ctrl + Shift and click to apply a mirroring selection to show
fault point sets in the active window.
5 Select the fault point sets in the filter to show them in the active
window.
If you select and clear fault point sets in the Input pane, this is reflected in
the filter and the active window.
Note: If you select or clear fault point sets in the Input pane or hide
displayed fault point sets in an active window, this is reflected in the filter
selection. If you have several active windows and if in each window you
have applied a separate selection in the filter, the filter preserves these
selections for each window. If you close the filter, apply changes in an
active window, and then reopen the filter, the filter is updated according to
the current display in the active window.
The active fault is highlighted in green color in the filter.
6 Optional: When you have selected the points, move the data to
another folder or delete it. To do this, right-click the selected
fault point sets and select an option.
• Select Create a new subfolder and move selected fault
point sets to create a subfolder and move the selection into this
subfolder. You can enter a name for the subfolder in the dialog box.
• Select Move selected fault point sets to to select an existing
subfolder under the currently used main folder and move the
selected fault point sets into this subfolder.
Note: Selected points are removed from the filter and hidden in
the active window when they are moved to a subfolder. If you want
to display these moved points in the filter, insert the subfolder as
input into the filter.
• Select Delete selected fault point sets to remove highlighted
points from the Fault dip/azimuth filter and permanently delete
these selected fault point sets from the project.
Note: Alternatively, press the Delete key to delete a selection.
Filter by size
You can use the Filter by size section to display the data by using the number
of points (size) per fault point set.
Note: The filters only show the fault point sets from the folder you have inserted,
but not its subfolders.
7 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction tools and, in the
list, select Fault extraction filters.

8 In the Fault point sets box, insert a folder that has extracted
fault point sets.
If a project contains an active folder with point sets, the Assisted
interpretation dialog box opens with the active folder selected as the
input.
9 Expand the Filter by size section.
Under the graph, the box on the leftmost side shows the minimum number of
points in the fault point sets from the input folder, and the box on the rightmost
side shows the maximum number of points.
You can use the options in the Filter by size section to select and clear points
in the filter.
• Select Select all in the filter to select all the points in the filter and
display them in an active window.
• Select Clear all selection in the filter to clear all the selected
points and remove them from the active window.
• Select Invert the current selection to invert the current
selection in the filter.
• Select Toggle log scale on the histogram to use the
logarithmic scale to display the data in the filter. Each bin shows
the [log10(number of fault point sets) + 0.1]. If there is a bin with no
fault point sets, then that bin is set to 0.
• In the Change number of bins box, enter a number to reduce or
increase the number of bins (the minimum is 1 and the maximum is
400) to display the data in the filter. Each bin has a size = (max
number of points - min number of points + 1) / number of bins
(rounded up).
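The bin-size and log-scale formulas above can be expressed directly (illustrative helpers mirroring the stated formulas, not part of the product):

```python
import math

def bin_size(min_points: int, max_points: int, n_bins: int) -> int:
    """Bin width used by the size filter: (max - min + 1) / number of bins,
    rounded up."""
    return math.ceil((max_points - min_points + 1) / n_bins)

def log_bin_height(count: int) -> float:
    """Bar height on the log scale: log10(count) + 0.1, with empty bins at 0."""
    return math.log10(count) + 0.1 if count > 0 else 0.0
```

For example, fault sizes spanning 0 to 999 points split into 10 bins gives a bin width of 100, and a bin holding 10 fault point sets has a log-scale height of 1.1.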
10 To select the size, move the sliders.
The numbers are updated based on the minimum and maximum of the
currently selected data.
When the pointer is moved over the bins, tooltips are displayed that show
the Number of faults, the Bin range, and the Frequency.
• Number of faults: The number of faults in the highlighted bin.
• Bin range: The range between the minimum and maximum
number of points (fault size) of the fault point sets in one bin.
• Frequency: The frequency of occurrence (probability) that the
size of a fault point set falls into this bin.
11 Select Apply to show the changes in the active window and
the Fault dip/azimuth filter section.
Note: Apply is unavailable after the changes are applied in the active
window. It is available when a new selection is made in the Filter by
size section.

If you select or clear fault point sets in the Input pane or hide displayed
fault point sets in an active window while the filter is open, a message
appears in the filter to notify you that the current selection is not
synchronized with the displayed faults in the active window and the
content of the active window will not be reflected in the selection state of
the filter. If you close the filter and apply changes in an active window, then
reopen the filter, the filter is not updated to the current display in an active
window and a message appears to notify you that the filter and window
states are not synchronized.
If you have several active windows and if in each window you applied a
different selection in the filter, the filter preserves these selections.
In both cases, Apply is available again and you can use it to apply the
current filter selection in the active window.
12 Optional: Select the Interactive check box to apply the
changes made in the Filter by size section when one of the
sliders is released.
If the Interactive check box is selected, the Apply button is unavailable.
13 Optional: Move your selections to another folder or delete it. To
do this, right-click the selected fault point sets and select an
option.
• Select Create a new subfolder and move selected fault
point sets to create a subfolder and move the selection into this
subfolder. You can enter a name for the subfolder in the Create a
new subfolder and move selected fault point sets dialog
box.
• Select Move selected fault point sets to to select an existing
subfolder under the currently used main folder and move the
selected fault point sets into this subfolder.
Note: Selected points are removed from the filter and hidden in an
active window when the selected fault point sets are moved to
a subfolder. If you want to display these moved points in the filter,
insert the subfolder as input to the filter.
• Select Delete selected fault point sets to remove highlighted
points from the Fault dip/azimuth filter and permanently delete
these selected fault point sets from the project.
Note: Alternatively, press the Delete key to delete a selection.

3. ML Horizon Prediction
Machine learning based horizon prediction
You can use machine learning (ML) based horizon prediction to predict
horizons within 3D seismic data volumes. This process runs inside Petrel and
does not use external resources.

Traditional waveform trackers are cross-correlation based and can track only
one waveform at a time. They can also be overly complicated to parameterize.
The ML based horizon prediction algorithm can track many waveforms at the
same time, is more powerful in capturing the specific waveform around a
reflector, and avoids the cycle skipping of traditional trackers. With minimal
parameters, it delivers robust outputs.

Figure 20: Overview of the Horizon prediction workflow. Depending on the model quality value, the
process might apply several iterations to meet that value. When it is finished, to continue ML
based horizon prediction, reduce the model quality value or add more labels to the same horizon
interpretation to give the algorithm additional information.

NN Horizon prediction starts with picking one or several points (labels) of a
targeted horizon.

When you select a label, interpreted as one or several points along one or
several seismic sections, the method automatically distributes up to 1000
points randomly along this label. For each of these randomly picked points, it
uses not only the pixel itself but takes 51 samples as positive examples
(25 above the picked point, the point itself, and 25 below). For each of these
randomly picked points, it also automatically extracts data in the background
class, called negative examples.

Figure 21: Training data generation: randomly picked points with positive and negative examples
defined.

These positive and negative examples follow the event, centered on the picked
point on the event itself. For each point, the method randomly extracts 6
background examples along the trace (shown in red). This provides sets of
positive and negative examples at a ratio of 1 to 6, which also increases the
total amount of training data, and describes what is and what is not part of
the horizon being tracked. Once the training data has been defined, it is fed
into the model.
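The sampling scheme can be sketched as follows. This illustrates the described 51-positive / 6-negative scheme; the function name and its arguments are assumptions, not the product API:

```python
import random

def training_samples(trace, pick_idx, half_window=25, n_negative=6):
    """For one randomly picked label point: 51 positive sample indices
    (25 above the pick, the pick itself, 25 below) and 6 negative (background)
    indices drawn at random from the rest of the same trace."""
    positives = list(range(pick_idx - half_window, pick_idx + half_window + 1))
    forbidden = set(positives)
    candidates = [i for i in range(len(trace)) if i not in forbidden]
    negatives = random.sample(candidates, n_negative)
    return positives, negatives
```

Applied to each of the up-to-1000 randomly distributed points, this yields positive and negative examples at the 1:6 ratio described above.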

Figure 22: Training data generation: positive and negative examples.

You can use Radial Basis Functions (RBF) to predict a horizon confidence
measure. This method is a type of instance-based learning, of which k-nearest
neighbors is probably the best-known example. It relies on pattern recognition
in the same trace as the labels. It expands a horizon by evaluating the
neighboring values in the vertical direction and calculating confidence score
values for further expansion. The algorithm continues expanding the horizon
while it meets the given confidence value, which is the model quality
parameter. Tracking stops when no remaining tracked points match the
specified criterion.

The RBF classifier selects class centroids based on the training data, and it
can have more than one centroid per class; Radial Basis Functions can
therefore represent high-dimensional non-linear manifolds. The classifier
uses many example seismic profiles to cover the background class and the
horizon class. By using a weighted sum of class densities for each class and
comparing the relative class density, a confidence measure for the horizon
class is produced. The example seismic profiles are produced by clustering
the profiles within each class and taking the cluster centroids, with
corresponding weights, as the RBF centroids. The density function is an
inverse quadratic of the centroid distance, accumulated according to the
centroid weights.
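The confidence measure described above can be sketched as follows. This is a simplified illustration of the weighted inverse-quadratic density idea, not the product's implementation; the function name and array layout are assumptions:

```python
import numpy as np

def rbf_confidence(profile, centroids, weights, labels):
    """Accumulate weighted inverse-quadratic densities per class and return
    the horizon class's share of the total density as the confidence.
    labels: 1 = horizon centroid, 0 = background centroid."""
    d2 = ((centroids - profile) ** 2).sum(axis=1)  # squared centroid distances
    density = weights / (1.0 + d2)                 # inverse quadratic kernel
    horizon = density[labels == 1].sum()
    background = density[labels == 0].sum()
    return horizon / (horizon + background)
```

A profile close to a horizon centroid and far from all background centroids yields a confidence near 1; the reverse yields a confidence near 0.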

When the label has enough training data (more than 100 points), the process
runs a neural network to produce a more robust model. Instead of learning
patterns, as with radial basis functions, the class information is captured in
the neural network weights. The neural network produces an output value in
the range [0, 1], which can be seen as its confidence score. The algorithm
feeds the picked training data into one of the models for training and then for
prediction. The process is iterative and stops when it hits the model quality
value, which minimizes the risk of following the wrong event while tracking
and can be used as an indicator of whether tracking should be continued or
stopped. You can increase the model quality value from the default setting to
track the horizon with higher confidence values.

Note: Even if the number of input points is larger than 100, the RBF method still
runs, and it can be used for QC.

Note: Within each iteration, the process picks 1000 new training samples on
the interpreted horizon and runs the training and prediction again. When
new labels are added, only 50% of the training data is drawn from the new
labels at the very first RBF iteration. In subsequent iterations, the training
data is selected evenly from everywhere.

The result of the NN horizon prediction process is a horizon interpretation and
confidence score cubes, which can be found in the Input pane. If the model
quality parameter is set high, the horizon prediction can stop earlier to
preserve the quality of the horizon. To continue tracking this horizon, you can
add more points by interpreting it on the seismic data, or you can reduce the
model quality parameter.

You can access the Horizon prediction in the Assisted interpretation group on
the Seismic Interpretation tab.

Note: The Horizon extraction license must be selected before Petrel is opened.
Selecting the license afterward through File > License module does not
activate the feature.

Figure 23: The Assisted seismic interpretation dialog box for Horizon prediction.

Use Horizon prediction


The Horizon prediction model is an RBF network model whose training is
based on user-defined horizon interpretation labels that describe a targeted
horizon. You can use this model to get the expected results with minimal user
interaction. The labeling workflow uses the pre-existing interpretation tools
available in Petrel to pick the horizon labels.
1 On the Seismic Interpretation tab, in the Assisted interpretation group,
select NN Horizon prediction.
The Assisted seismic interpretation dialog box opens.
2 Insert a seismic cube into the Assisted seismic interpretation dialog box.
If a Petrel project contains an active seismic cube, the Assisted seismic
interpretation dialog box opens with the active seismic cube selected as input.

3 Insert a seismic horizon.
If a Petrel project contains an active seismic horizon, the Assisted seismic
interpretation dialog box opens with the active seismic horizon selected as
input.

A seismic horizon represents one or several points (labels) of a targeted
horizon. Depending on the event complexity, for instance in a faulted area, it
can help to pick labels on a few lines to describe the event better. A reflector
must be persistent (positive or negative) to get a consistent output; the
signal must be similar enough to track.
4 On the Advanced tab, enter the parameters.
The model quality of a prediction model trained on the provided training data
can be used to minimize the risk of following the wrong event while tracking.
It serves as an indicator of whether tracking should be continued or stopped.
Increasing the model quality value from the default setting tracks the horizon
with higher confidence values.

Note: Depending on the data quality, the complexity of the seismic signal
features, and the specified model quality value, it can take several iterations
of training and prediction to get the final seismic horizon.

You can use a Fault cube (a cube that describes fault discontinuities, for
example a fault probability attribute cube) to stop horizon interpretation at
visible faults while tracking. Fault threshold defines the fault value: every
value above the specified threshold is considered a fault, and everything
below is considered background.
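The thresholding rule can be expressed as a small sketch (a hypothetical helper illustrating the rule, not the product API):

```python
def first_fault_index(fault_values, threshold):
    """Return the index of the first sample along a tracking path whose
    fault-cube value exceeds the threshold (where tracking would stop),
    or None if no sample is classified as fault."""
    for i, value in enumerate(fault_values):
        if value > threshold:
            return i
    return None
```

With a threshold of 0.5, the values [0.1, 0.2, 0.9, 0.3] stop tracking at index 2, while values that never exceed the threshold let tracking continue.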

You can use the Outputs options to create confidence score cubes for each
iteration to QC the result. The confidence score cube is the output of the model
given the seismic as input. Based on the given labels, a model is trained to give
high confidence at the picked reflector and low(er) confidence elsewhere.
While tracking a seismic event, the confidence is calculated on the fly to
evaluate whether tracking should continue (it continues while the confidence
at the reflector is above 0.5).

The Create final confidence cube option creates the final RBF and NN cubes
for the last iteration. The Create all confidence cubes option creates RBF and
NN cubes for each iteration. The None option does not create confidence
score cubes.

Note: Created RBF and NN cubes are always virtual. You can realize them if
needed; the realization process can take some time.

The RBF virtual cube represents the horizon confidence given the trained RBF
model, while the NN virtual cube represents the horizon confidence given the
trained NN model. The reason for having two different models is that RBF
models require little training data to give a robust model and can also be
modified on the fly while tracking to enforce trace-to-trace consistency.
An NN model requires more training data (for example, more traces) to
produce a robust model, but it is more accurate given enough training data.
While tracking
both models are used: the NN model to predict horizon confidence regionally,
and the RBF model to predict trace-to-trace confidence locally. The RBF model
is modified on the fly to generate a local model for trace-to-trace confidence
prediction. The RBF virtual cube represents the non-local RBF model before
modification to a local model, in other words a regional RBF model. The NN
model is used as a generic regional model for the whole seismic cube, while the
RBF model is primarily used to enforce local consistency.

Note: The RBF model is not a neural network model, which is why it is well
suited for on-the-fly transformation from a regional to a local model.
5 Select Track to start the training and prediction.
Note: You can run horizon prediction for different seismic horizons at the
same time. When Track is applied, use a different input to start another
horizon prediction. The process is asynchronous, and Petrel is locked
only while a horizon interpretation is updated in the Input pane.

The Undo option is available in the Horizon prediction dialog box and applies
to horizon prediction steps only. It becomes active after at least one iteration
has run. Each undo click brings the output one step (one iteration) back. The
Undo option stays available after the Petrel project is saved and reopened
and reflects the different horizon prediction outputs.

Horizon prediction attributes and model stored with seismic horizon

Each seismic horizon that is tracked with the ML based horizon prediction has
several attributes stored under it.

Z or TWT attribute
The Z or TWT attribute describes the elevation of each point in a tracked
horizon, based on the domain of the input seismic data and in the project units.

Amplitude attribute
The Amplitude attribute describes the amplitude value of the input seismic data
in each point of a tracked horizon.

Track Scores attribute

The Track Scores attribute describes the confidence score of each point in the
extracted horizon. This is the confidence value that is produced while tracking
by the prediction model at the position of the horizon. The same model is used
to calculate values for the confidence cube. Values might be different between
this attribute and the corresponding values extracted from the confidence
cube, because the values from the cube are calculated at sample positions and
subsequently interpolated at the horizon position.

MLTracker iteration attribute


The MLTracker iteration attribute describes which points were tracked in which
iteration.
Note: On the Colors tab, in the Make discrete interval in the color table
parameter, enter 1 and make sure you set up the maximum and minimum
values from the data. You can display the Color legend in the active window to
read the iteration attribute.

Horizon prediction model


The Horizon prediction model is stored under each seismic horizon tracked with
the ML based horizon prediction. The model is updated with each iteration
instead of being re-created each time, which reduces the training time within
an iteration without losing quality.
Note: Horizon prediction models are not supported by RPT or by exporting a
seismic horizon. A model can be stored and reused only by the original
horizon, in the project in which that horizon was tracked with Horizon
prediction. The horizon prediction models object stores the RBF and NN
models used to track a horizon. Repeated tracking reuses and updates these
models instead of creating new ones, which saves time when creating and
training models for tracking.

*Mark of SLB.

Copyright © 2023 SLB. All rights reserved
