Machine Learning Assisted Seismic Interpretation
User Guide
Version 2023.3.1.0
Copyright Notice
Copyright © 2023 SLB. All rights reserved.
This work contains the confidential and proprietary trade secrets of SLB and may not
be copied or stored in an information retrieval system, transferred, used, distributed,
translated or retransmitted in any form or by any means, electronic or mechanical, in
whole or in part, without the express written permission of the copyright owner.
2. Fault Extraction............................................................................................................................................................................... 7
2.1 Fault extraction process...................................................................................................................................................... 7
2.2 Perform Fault extraction based on fault prediction result ....................................................................................... 7
Evaluate ...................................................................................................................................................................................... 9
Extraction parameters ........................................................................................................................................................ 11
Advanced parameters ....................................................................................................................................................... 12
Post-processing................................................................................................................................................................... 13
Fault extraction tools .......................................................................................................................................................... 14
2.3 Edit Fault extraction results ............................................................................................................................................. 18
Figure 2: User-trained fault prediction workflow.
6 When a fault prediction session is running, you can find the task in the Task Manager and open it to view the updated Message log.
• If the user has not changed the seismic storage default directory from System settings-Seismic settings-Seismic files, the result fault cubes are stored in the Petrel project directory: /project name.ml/sessions, and will be imported into Petrel after the session finishes successfully.
• If the user has changed the seismic storage default directory from System settings-Seismic settings-Seismic files, the result fault cubes are stored in the assigned directory, and will be imported into Petrel after the session finishes successfully.
Note: If Petrel was closed unexpectedly, Python may still run in the background and write results to the directory.
1.3 Labeling strategy for user-guided training
Ensure you generate high-quality labels for training, with sufficient detail and consistency to act as an effective training input. Poor quality or inaccurate fault labels can lead to poor performance of the prediction model.
Consider the following when performing a labeling task, compared to conventional practice in the manual fault interpretation workflow:
• Identify the focus of the interpretation. This might be a particular
depth interval, one or more specific fault blocks, or regions of the
cube that are dominated by a particular structural style.
• Identify which reflection events are strongest and most pertinent across the area of interest. This is important because they form the basis for identifying the fault offset.
• Also consider the coverage of these events, for example, where the fault indicator lies at depth.
• Use the identified horizon events to identify fault offsets with
confidence.
• Identify indicators of faulting. These might take the form of offsets of strong amplitude reflection events, variations in seismic character from one side of the fault (upthrown or downthrown) to the other, areas of low amplitude, or a reduction in areas of otherwise competent reflectivity.
• Identify regions where the fault cannot go, such as well-imaged fault blocks with high-amplitude, stable reflectivity that show no indications of faulting. Using previously identified fault indicators, define the trend of the fault.
• Identify fault indicators along the fault trend and the upper and
lower vertical limits of the fault trend.
• Identify picking nodes based on trend geometry changes and
high confidence fault indicators.
• Pick the fault from top to bottom integrating the identified picking
nodes that capture the changes in the fault geometry and pass
through the highest confidence region.
Figure 6: Fault labelling examples and guidance, note the level of accuracy required to capture an
effective training label. Copyright Commonwealth of Australia (Geoscience Australia).
• Be consistent with the labels provided. If you are interested in only
identifying large scale faults, then do not label the smaller scale or
polygonal faults.
• Avoid ambiguous labels. If you are in doubt about the presence of a particular label, leave it out, but be consistent in choosing particular labels. You can always add more training labels in subsequent predictions to improve the result.
Consider cropping data volumes to the region of interest. However, the seismic data must meet the minimum requirement of at least 266x266 samples for any labeled intersection for ML to run.
2. Fault Extraction
2.1 Fault extraction process
The Machine learning assisted seismic interpretation workflow takes a point cloud approach: ML-based fault prediction cubes undergo a 3D geometrical analysis that enables the extraction of faults as segmented single objects that preserve the input resolution and fault plane geometry.
Run the Fault extraction process, which is available on the Seismic Interpretation tab, in the Assisted interpretation group, with a fault prediction cube as input to extract faults. When fault point sets are extracted, you can display them all at once or start looking into specific fault point sets one by one. You can use the Fault point set editing Tool Palette to merge, split, show, or hide faults.
To get the result faster, the input fault point sets can be subsampled in the Fault
point sets subsampling process, which is available on the Seismic
Interpretation tab, in the Assisted interpretation group, under Fault extraction
tools.
Edited fault point sets are used as input for the Fault Framework process.
Figure 8: Edited fault point sets are used as input for the Fault Framework process.
Note: The Fault extraction license must be selected before Petrel is opened. Selecting the license afterward under File/License module does not activate the feature.
Evaluate
Under Evaluate, you can create planarity and azimuth cubes from a fault
prediction cube and analyze them.
Planarity cube box
A planarity cube is used to extract faults and split them at intersections. It highlights how planar a fault region is, with a value range from 0 (no flatness found in the search radius) to 1 (a completely flat plane in the search radius). To understand the values better, open the Settings dialog box for the planarity and azimuth cubes and, on the Style tab, select None under Interpolation method.
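The exact planarity formula used by Fault extraction is not published, but a common way to express "how planar" a local point neighborhood is uses the eigenvalues of its covariance matrix: flat neighborhoods have two large, similar eigenvalues and one near zero. The sketch below is illustrative only; the formula and scaling are assumptions, not the shipped implementation.

```python
import numpy as np

def planarity(points):
    """Planarity of a local point neighborhood from PCA eigenvalues.

    Returns a value in [0, 1]: ~1 for points on a flat plane,
    ~0 when no planar structure is found in the neighborhood.
    Illustrative formula (lam1 - lam2) / lam0; the product's exact
    definition is not published.
    """
    centered = points - points.mean(axis=0)
    # Eigenvalues of the 3x3 covariance matrix, sorted descending.
    lam = np.linalg.eigvalsh(np.cov(centered.T))[::-1]
    # Planar neighborhoods have lam[0] ~ lam[1] >> lam[2].
    return (lam[1] - lam[2]) / lam[0] if lam[0] > 0 else 0.0
```

A regular grid of points on a horizontal plane gives a planarity of 1.0, while perfectly collinear points give 0.0, matching the value range described above.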
Figure 10: Planarity cube: Example of fault point sets split at the intersection between yellow, green
and light blue patch with the Planarity value set to 0.55, while the light blue and violet patches are split
because of fast azimuth changes (compare next Azimuth figure).
Note: If faults are very steep (close to 90° dip), the azimuth can sometimes switch by -180° or +180° in some areas because of the local dip (dip calculated within a very small radius). This causes tiny holes in the extracted fault point sets.
Figure 12: Azimuth cube: Example of fault point sets split between light blue and violet patch
because of fast changes in azimuth value in alignment with the sectoring and merging optimization,
while the three other patches are split by planarity (compare with the previous Planarity figure).
Extraction parameters
You can use the Azimuth and Fault parameters to influence the output of the Fault extraction process.
Azimuth range
Azimuth parameters are used to extract faults within the specified azimuth sector range. You can define the parameters to extract all faults, or only the faults within a specific azimuth range, by editing the sector start and sector end values. The sector is measured from true geographic north and calculated clockwise.
Select the Symmetrical check box to enable the additional extraction of faults
with opposing azimuth (that is, the opposing dip direction) compared to the
azimuth range specified by setting the sector start and end fields. When the
specified sector range is more than 180 degrees and the Symmetrical check
box is selected, all the faults are extracted.
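The sector test described above can be sketched as a small helper: azimuth is taken clockwise from true north, the sector can wrap past 360°, and the Symmetrical option also accepts the opposing dip direction (azimuth + 180°). This is an illustrative sketch of the rule as described, not the shipped implementation.

```python
def in_sector(azimuth, start, end, symmetrical=False):
    """True if a fault azimuth (degrees clockwise from true north)
    falls in the sector [start, end], handling wrap past 360.
    With symmetrical=True, the opposing azimuth (azimuth + 180) is
    also accepted, matching the Symmetrical check box.
    Illustrative sketch of the described behavior."""
    def inside(a):
        a = a % 360.0
        s, e = start % 360.0, end % 360.0
        # A sector such as 340..20 wraps through north.
        return s <= a <= e if s <= e else (a >= s or a <= e)
    return inside(azimuth) or (symmetrical and inside(azimuth + 180.0))
```

For example, a fault with azimuth 210° is outside the sector 0°–60°, but is accepted when Symmetrical is on, because its opposing azimuth (30°) falls inside the sector.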
Note: If faults are very steep (close to 90° dip), the azimuth can sometimes switch by -180° or +180° in some areas because of the local dip (dip calculated within a very small radius). This causes tiny holes in the extracted fault point sets.
Fault definition
Fault parameters define the final fault output.
The Planarity threshold is used to extract faults and split them at intersections. The value range is [0; 1]:
• 0 means there is no planarity within the radius specified
under Evaluate.
• 1 means that fault regions are flat. Values above the specified
planarity threshold are used to extract and split faults at
intersections. Therefore, if the planarity threshold value is too low,
faults might not be correctly separated at intersections.
Other criteria for merging are based on an automatic analysis of whether a merge might introduce branching effects in 3D, how rapidly the azimuth is changing, and how well the patches fit into the fault. In addition, when merging, Fault extraction performs a global optimization that aims to create faults that are consistent and well integrated, to improve the result. Because of this, in most cases, fault extraction subdivides a fault into geologically consistent patches that stop at intersections or where the azimuth changes too fast. The main goal of the algorithm is to produce faults that are geologically consistent, not necessarily huge faults that might not belong together geologically.
If the output of the extraction does not look geologically correct, you can merge or split faults manually. The Fault min. size parameter removes extracted faults that have fewer points than the specified value.
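The Fault min. size rule is a simple count filter over the extracted faults. A minimal sketch, assuming a hypothetical data model in which each extracted fault is a mapping from a fault id to its point list:

```python
def filter_faults(faults, fault_min_size):
    """Remove extracted faults that have fewer points than the
    Fault min. size value. Sketch only: `faults` is an assumed
    mapping of fault id -> sequence of points, not the actual
    Petrel data model."""
    return {fid: pts for fid, pts in faults.items()
            if len(pts) >= fault_min_size}
```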
Advanced parameters
The provided extraction and advanced parameters are optimized. However,
you can use advanced parameters to address some special cases and
successfully extract complicated faults.
You can use the Sector size box to subdivide the defined sector range into
azimuth sectors. Each of these subdivided sectors can have a sector overlap. Sector overlap must be less than half of the Sector size. First, fault
patches are extracted inside these subsectors. If a sector overlap is specified, patches between adjacent sectors are then automatically merged if, among other conditions, the overlap in space exceeds the value of the minimum overlap parameter. In general, keep the default 10° sector size for best quality. A sector size bigger than the default improves performance but can degrade the output, producing faults with a rapidly changing azimuth, while a smaller sector size can split faults into small patches because of the resolution limit of the azimuth cube. The Patch min. size parameter removes the extracted faults that are smaller than the specified value within each defined sector.
The Min. overlap parameter defines the minimum overlap between two faults from neighboring sectors that represent the same fault. Only faults with a bigger overlap (in %) than the defined parameter are considered mergeable by the algorithm.
Note: Not all the patches that fit the overlap criterion are merged. However, no patch that overlaps less than the specified minimum overlap is merged.
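The overlap test can be sketched as follows. How the product measures spatial overlap is not published; this illustrative version voxelizes the two patches and reports the shared cells as a percentage of the smaller patch, then applies the Min. overlap gate described above.

```python
def overlap_percent(patch_a, patch_b):
    """Spatial overlap of two fault patches, as a percentage of the
    smaller patch, using voxelized point sets. Illustrative measure;
    the product's exact overlap definition is not published."""
    a, b = set(map(tuple, patch_a)), set(map(tuple, patch_b))
    return 100.0 * len(a & b) / min(len(a), len(b))

def mergeable(patch_a, patch_b, min_overlap_percent):
    """Patches at or below the Min. overlap threshold are never
    merged; those above it are only candidates, because other
    merge conditions also apply."""
    return overlap_percent(patch_a, patch_b) > min_overlap_percent
```

For two patches sharing 2 of the smaller patch's 3 cells, the overlap is about 67%, so they pass a 50% threshold but fail a 70% one.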
Post-processing
During the Fault extraction process, you can optionally choose to subsample
fault point sets, create dip and azimuth attributes, or do both.
Subsample fault point sets
By default, this option is not selected. Select the check box and change the
parameters if required:
• Azimuthal sampling defines the sampling bin size in the strike
direction.
• Vertical sampling defines the sampling bin size in the depth
direction.
In the perpendicular direction, the fault is reduced to a width of one point.
Note: The default vertical sampling parameter is always defined in depth units. The specified average velocity is used to convert these units into time when subsampling fault point sets extracted in the time domain. If the specified azimuthal and vertical sampling values are smaller than the original sampling of the input, the output remains almost identical.
Subsampling is not a simple decimation: it preserves the main features of the fault, but at a lower resolution.
One new point is generated per sampling interval. This newly generated point is freely and optimally placed inside the sample interval to preserve the main features of the fault. This is done with a weighted Gaussian fitting that takes into account not only the points inside the current sample interval, but also those inside the adjacent intervals. Original points closer to the center of the current sampling interval have a higher influence on the placement of the newly generated point than points outside the interval. The broad influence area ensures that the overall fault feature is preserved, even in areas with few points. Edges of the fault are processed separately and preserved by focusing the Gaussian influence on the edge of the original fault. Therefore, the shape and the limits of the fault are preserved as well.
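The Gaussian-weighted placement described above can be sketched as a weighted average: points near the interval center get the highest weight, and points in adjacent intervals still contribute. The Gaussian width and weighting are assumptions for illustration; the actual fitting is more elaborate (it also handles fault edges separately).

```python
import numpy as np

def subsample_bin(points, bin_center, bin_size):
    """Place one representative point for a sampling interval as a
    Gaussian-weighted average of nearby points. Sketch of the idea
    only: sigma and the exact weighting are assumptions.

    `points` is an (N, D) array; the weight is based on the first
    coordinate (the sampling direction)."""
    pts = np.asarray(points, dtype=float)
    sigma = bin_size / 2.0  # assumed width of the Gaussian influence
    # Points near the bin center dominate; adjacent-bin points
    # still contribute with smaller weights.
    w = np.exp(-0.5 * ((pts[:, 0] - bin_center) / sigma) ** 2)
    return (w[:, None] * pts).sum(axis=0) / w.sum()
```

Two points placed symmetrically around the bin center receive equal weights, so the generated point lands exactly at the center, which is the behavior one would expect from the description.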
Note: Subsampling is mainly an asynchronous process and locks Petrel only
when it adds subsampled faults into the Input pane.
Create dip and azimuth attributes for fault point sets
By default, the option is selected. You can change the parameters if required:
• Horizontal radius defines the horizontal radius used to calculate
dip and azimuth.
• Vertical radius defines the vertical radius used to calculate the
dip.
• Average velocity is used only for time domain objects. To
calculate dip for extracted fault point sets in time domain, the TWT
values are first internally converted to depth using the specified
average velocity.
Note: To get reliable dip and azimuth attributes, the horizontal and vertical
radius values must be at least twice the size of the sampling interval.
Azimuth and dip are derived from the normal vector that is locally calculated in the neighborhood defined by the radius parameters. In that area, a plane is fit to the fault points using a Principal Component Analysis. The normal vector of that plane describes the dip and azimuth.
Compared to the Azimuth calculation in the evaluation step, the post-processing step gives a more precise estimation. Each fault in the post-processing step is processed separately and, depending on the extent of the radius, a geologically more meaningful dip and azimuth can be derived.
The azimuth is calculated according to the geologic definition of the strike and
defined clockwise from North in 360 degrees. The algorithm handles left- and
right-handed coordinate systems.
It follows the convention of the 'right hand rule' in geology. Therefore, the
azimuth is always defined with the dipping of the plane to the right when looking
in the strike direction. For example, a fault with a strike/azimuth towards north
(0°) will have a dip direction towards east (90°), and a fault with a strike south
(180°) will have a dip direction towards west (270°).
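The conversion from a fitted plane normal to dip and azimuth under the right-hand rule can be sketched as follows. The component convention (east, north, up) is an assumption for illustration; the key relations are that dip is the normal's angle from vertical, the dip direction is the azimuth of the normal's horizontal component, and strike sits 90° counterclockwise of the dip direction.

```python
import numpy as np

def dip_and_azimuth(normal):
    """Dip and strike azimuth from a plane normal, following the
    geologic right-hand rule: looking along strike, the plane dips
    to the right. Sketch; normal = (east, north, up) components.
    Azimuths are clockwise from north in [0, 360)."""
    e, n, u = normal / np.linalg.norm(normal)
    if u < 0:  # always work with the upward-pointing normal
        e, n, u = -e, -n, -u
    dip = np.degrees(np.arccos(u))                  # 0 = flat, 90 = vertical
    dip_dir = np.degrees(np.arctan2(e, n)) % 360.0  # azimuth of steepest descent
    strike = (dip_dir - 90.0) % 360.0               # dip is 90 deg right of strike
    return dip, strike, dip_dir
```

This reproduces the worked example above: a plane dipping 45° toward east has strike 0° (north) and dip direction 90°, and one dipping toward west has strike 180° and dip direction 270°.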
Note: Dip and azimuth calculation is mainly an asynchronous process and
locks Petrel only when it adds calculated attributes into the Input pane.
1 Open a 3D window.
2 Display extracted fault point sets.
3 On the Seismic Interpretation tab, in the Assisted interpretation group, select Fault point set editing. Alternatively, right-click the fault point set in the 3D window and select Fault point set editing from the Point set Mini toolbar.
Merge fault point sets is active by default.
The Fault point set editing Tool Palette opens.
4 In a 3D window, select several fault point sets to merge.
The selected point sets are highlighted.
Note: You can press Ctrl + Z to undo the selection. When you undo, this is
shared between the Merge and Split options and reverts changes in the
order the steps were applied. To apply the undo, make sure that either
the Merge option or Split option is activated on the Fault point sets
editing Tool Palette.
Figure 14 : Example of selected fault point sets to be merged.
5 Press Ctrl and click a selected point set again to clear it.
6 When you have selected all the required fault point sets,
double-click to merge them.
The fault point sets targeted for merging are hidden from the 3D window and remain unchanged. You can find them in the original folder in the Input pane, in the Processed faults subfolder. The newly created fault point sets are shown in the 3D window and located at the top of the same Fault extraction folder.
Note: You can press Ctrl + Z to undo the merge operation. When you
undo, this is shared between the Merge and Split options and reverts the
changes in the order the steps were applied. To apply the undo, make
sure that either the Merge option or Split option is activated on the Fault
point sets editing Tool Palette.
Figure 15: Merged faults.
Figure 16: Fault point set editing Tool Palette with the active fault point set.
3 To define the split area, click around a point set to draw the polygon.
Note: You can press Ctrl + Z to reset the polygon, but the Split fault point
sets tool stays active. When you undo, this is shared between
the Merge and Split options and reverts the changes in the order the steps
were applied. To apply the undo, make sure that either the Merge option
or Split option is activated on the Fault point sets editing Tool Palette.
Figure 17: Example of splitting the selected fault.
Figure 18: A split fault.
Figure 19: Fault point set editing Tool Palette with an active fault folder.
• Press Ctrl and click to clear points in the filter and hide some of the fault point sets displayed in the active window.
• Press Ctrl + Shift and click to apply a mirroring selection and show fault point sets in the active window.
5 Select the fault point sets in the filter to show them in the active
window.
If you select and clear fault point sets in the Input pane, this is reflected in
the filter and the active window.
Note: If you select or clear fault point sets in the Input pane, or hide displayed fault point sets in an active window, this is reflected in the filter selection. If you have several active windows and have applied a separate filter selection in each window, the filter preserves these selections per window. If you close the filter, apply changes in an active window, and then reopen the filter, the filter is updated according to the current display in that window.
The active fault is highlighted in green color in the filter.
6 Optional: When you have selected the points, move the data to
another folder or delete it. To do this, right-click the selected
fault point sets and select an option.
• Select Create a new subfolder and move selected fault point sets to create a subfolder and move the selection into this subfolder. You can enter a name for the subfolder in the dialog box.
• Select Move selected fault point sets to to select an existing
subfolder under the currently used main folder and move the
selected fault point sets into this subfolder.
Note: Selected points are removed from the filter and hidden in
the active window when they are moved to a subfolder. If you want
to display these moved points in the filter, insert the subfolder as
input into the filter.
• Select Delete selected fault point sets to remove highlighted
points from the Fault dip/azimuth filter and permanently delete
these selected fault point sets from the project.
Note: Alternatively, select the Delete key to delete a selection.
Filter by size
You can use the Filter by size section to display the data by using the number
of points (size) per fault point set.
Note: The filters only show the fault point sets from the folder you have inserted,
but not its subfolders.
7 On the Seismic Interpretation tab, in the Assisted
interpretation group, select Fault extraction tools and, in the
list, select Fault extraction filters.
8 In the Fault point sets box, insert a folder that has extracted
fault point sets.
If a project contains an active folder with point sets, the Assisted
interpretation dialog box opens with the active folder selected as the
input.
9 Expand the Filter by size section.
Under the graph, the box on the leftmost side shows the minimum number of
points in the fault point sets from the input folder, and the box on the rightmost
side shows the maximum number of points.
You can use the options in the Filter by size section to select and clear points
in the filter.
• Select Select all in the filter to select all the points in the filter to
display them in an active window.
• Select Clear all selection in the filter to clear all the selected points in the filter and remove them from the active window.
• Select Invert the current selection to invert the current
selection in the filter.
• Select Toggle log scale on the histogram to use the
logarithmic scale to display the data in the filter. Each bin shows
the [log10(number of fault point sets) + 0.1]. If there is a bin with no
fault point sets, then that bin is set to 0.
• In the Change number of bins box, enter a number to reduce or
increase the number of bins (the minimum is 1 and the maximum is
400) to display the data in the filter. Each bin has a size = (max
number of points - min number of points + 1) / number of bins
(rounded up).
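The two formulas in the list above (the log-scale bin height and the bin size) can be written out directly; this is a straightforward transcription of the stated rules, not product code:

```python
import math

def bin_size(min_points, max_points, n_bins):
    """Bin size used by Filter by size:
    (max number of points - min number of points + 1) / number of
    bins, rounded up."""
    return math.ceil((max_points - min_points + 1) / n_bins)

def log_bin_height(count):
    """Bin height on the logarithmic scale:
    log10(number of fault point sets) + 0.1, with empty bins
    shown as 0."""
    return math.log10(count) + 0.1 if count > 0 else 0.0
```

For example, with fault sizes spanning 1 to 1000 points and 400 bins, each bin covers 3 sizes (1000 / 400 = 2.5, rounded up), and a bin holding 10 fault point sets is drawn with height 1.1 on the log scale.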
10 To select the size, move the sliders.
The numbers are updated based on the minimum and maximum of the
currently selected data.
When the pointer is moved over the bins, tooltips are displayed that show
the Number of faults, the Bin range, and the Frequency.
• Number of faults: The number of faults in the highlighted bin.
• Bin range: The range between the minimum and maximum number of points (fault size) of the fault point sets in one bin.
• Frequency: The frequency of occurrence (probability) that the
size of a fault point set falls into this bin.
11 Select Apply to show the changes in the active window and
the Fault dip/azimuth filter section.
Note: Apply is unavailable after the changes are applied in the active
window. It is available when a new selection is made in the Filter by
size section.
If you select or clear fault point sets in the Input pane, or hide displayed fault point sets in an active window while the filter is open, a message appears in the filter to notify you that the current selection is not synchronized with the displayed faults, and the content of the active window is not reflected in the selection state of the filter. If you close the filter, apply changes in an active window, and then reopen the filter, the filter is not updated to the current display, and a message notifies you that the filter and window states are not synchronized.
If you have several active windows and if in each window you applied a
different selection in the filter, the filter preserves these selections.
In both cases, Apply is available again and you can use it to apply the
current filter selection in the active window.
12 Optional: Select the Interactive check box to apply the
changes made in the Filter by size section when one of the
sliders is released.
If the Interactive check box is selected, the Apply button is unavailable.
13 Optional: Move your selections to another folder or delete them. To do this, right-click the selected fault point sets and select an option.
• Select Create a new subfolder and move selected fault
point sets to create a subfolder and move the selection into this
subfolder. You can enter a name for the subfolder in the Create a
new subfolder and move selected fault point sets dialog
box.
• Select Move selected fault point sets to to select an existing
subfolder under the currently used main folder and move the
selected fault point sets into this subfolder.
Note: Selected points are removed from the filter and hidden in an active window when the selected fault point sets are moved to a subfolder. If you want to display these moved points in the filter, insert the subfolder as input to the filter.
• Select Delete selected fault point sets to remove highlighted
points from the Fault dip/azimuth filter and permanently delete
these selected fault point sets from the project.
Note: Alternatively, select the Delete key to delete a selection.
3. ML Horizon Prediction
Machine learning based horizon prediction
You can use Machine learning (ML) based horizon prediction to predict horizons within 3D seismic data volumes. This process is run by Petrel and does not use external resources.
Traditional waveform trackers are cross-correlation based and can track only one waveform at a time. In addition, they can be overly complicated to parameterize. The ML-based horizon prediction algorithm can track many waveforms at the same time. It is more powerful in capturing the specific waveform around a reflector and avoids cycle skipping compared to traditional trackers. With minimal parameters, it can deliver robust outputs.
Figure 20: Overview of the Horizon prediction workflow. Depending on the model quality value, the process might run several iterations to meet that value. When it has finished, to continue ML-based horizon prediction, reduce the model quality value or add more labels to the same horizon interpretation to give additional information to the algorithm.
NN horizon prediction starts with picking one or several points (labels) of a targeted horizon.
When you select a label, interpreted as one or several points along one or several seismic sections, the method automatically distributes up to 1000 points randomly along this label. For each of these randomly picked points, it uses not only the pixel itself but takes 51 samples as positive examples (the picked point plus 25 samples above and 25 below it). For each of these randomly picked points, it also automatically extracts data for the background class, called negative examples.
Figure 21: Training data generation randomly picked points with positive and negative examples
defined.
These positive examples follow the event, centered on the picked point on the event itself. For each randomly picked point, the method also automatically extracts data for the background class (shown in red): it randomly extracts 6 background examples along the trace. This provides sets of positive and negative examples with a ratio of 1 to 6, which also increases the total amount of training data. The examples describe what is, and what is not, part of the horizon being tracked. Once the training data has been defined, it is fed into the model.
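The training-data generation for one picked point can be sketched as follows: one positive example, a 51-sample window (25 above and 25 below the pick), plus 6 negative windows at random background positions along the same trace. The window-extraction details (keeping negatives away from the event, the fixed seed) are assumptions for illustration.

```python
import random

def make_examples(trace, pick_index, half_window=25, n_negative=6, rng=None):
    """For one picked point: one positive example, a window of
    2*25 + 1 = 51 samples centered on the pick, plus 6 negative
    (background) windows at random positions along the same trace,
    giving the described 1-to-6 positive-to-negative ratio.
    Sketch of the described scheme; details are assumptions."""
    rng = rng or random.Random(0)
    w = half_window
    positive = trace[pick_index - w : pick_index + w + 1]
    negatives = []
    while len(negatives) < n_negative:
        i = rng.randrange(w, len(trace) - w)
        if abs(i - pick_index) > w:  # keep background away from the event
            negatives.append(trace[i - w : i + w + 1])
    return positive, negatives
```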
Figure 22: Training data generation positive and negative examples.
You can use Radial Basis Functions (RBF) to predict a horizon confidence measure. This method is a type of instance-based learning, of which k-nearest neighbors is probably the best-known example. It relies on pattern recognition in the same trace as the labels. It expands a horizon by evaluating the neighboring values in the vertical direction and by calculating confidence score values for further expansion. The algorithm continues expanding a horizon as long as it meets the given confidence value, which is the model quality parameter. Tracking stops when it runs out of tracked points that match the specified criteria.
The RBF classifier selects class centroids based on the training data, and it might have more than one centroid per class. Radial Basis Functions can therefore represent high-dimensional non-linear manifolds. The RBF classifier uses many examples of seismic profiles to cover the background class and the horizon class. By using a weighted sum of class densities for each class and comparing the relative class density, a confidence measure for the horizon class is produced. The example seismic profiles are produced by clustering examples of profiles within each class and taking the cluster centroids, with corresponding weights, as RBF centroids. The density function used is an inverse quadratic of the centroid distance and is accumulated according to centroid weights.
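The confidence measure described above can be sketched as a weighted sum of inverse quadratic kernels over the centroids of each class, compared as a relative class density. The kernel scaling is an assumption; the actual model also involves the clustering and on-the-fly localization described elsewhere in this section.

```python
import numpy as np

def rbf_confidence(profile, centroids, weights, labels):
    """Confidence that a seismic profile belongs to the horizon
    class, as the relative class density from weighted inverse
    quadratic kernels over RBF centroids. Sketch following the
    description; the kernel scaling is an assumption.

    labels: 1 for horizon-class centroids, 0 for background."""
    d2 = ((centroids - profile) ** 2).sum(axis=1)  # squared centroid distances
    dens = weights / (1.0 + d2)                    # inverse quadratic kernel
    horizon = dens[labels == 1].sum()
    background = dens[labels == 0].sum()
    return horizon / (horizon + background)
```

A profile lying on a horizon-class centroid yields a confidence near 1, and one lying on a background centroid yields a confidence near 0, which is the behavior the tracker relies on.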
When the label has enough training data (more than 100 points), the process runs the neural network to produce a more robust model. Instead of learning patterns, as with Radial Basis Functions, the class information is captured in the neural net weights. The neural network produces an output value in the range [0, 1], which can be seen as the neural net confidence score. The algorithm feeds the picked training data into one of the models for training and then for prediction. The process is iterative and stops when it hits the model quality value, which minimizes the risk of following the wrong event while tracking. It can be used as an indicator of whether tracking must be continued or stopped. You can increase the model quality value from the default setting to track the horizon with higher confidence values.
Note: Even if the number of input points is larger than 100, the RBF method still runs, and it can be used for QC.
Note: Within each iteration, the process picks 1000 new training data points in the interpreted horizon and runs the training and prediction again. When new labels are added, they account for only 50% of the training data in the very first RBF iteration. In subsequent iterations, the training data is selected evenly from everywhere.
You can access the Horizon prediction in the Assisted interpretation group on
the Seismic Interpretation tab.
Note: The Horizon extraction license must be selected before Petrel is opened. Selecting the license afterward under File/License module does not activate the feature.
Figure 23: The Assisted seismic interpretation dialog box for Horizon prediction.
16 Insert a seismic horizon.
If a Petrel project contains an active seismic horizon, the Assisted seismic
interpretation dialog box opens with the active seismic horizon selected as
input.
Note: Depending on the data quality, the seismic signal feature complexity, and the specified model quality value, it can take several iterations of training and prediction to get the final seismic horizon.
You can use a Fault cube (a cube that describes fault discontinuities, for example a fault probability attribute cube) to stop horizon interpretation at visible faults while tracking. Fault threshold defines the fault cutoff value: every value above it is considered a fault, and everything below is considered background.
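The Fault threshold rule can be sketched as a simple stop condition on the fault cube values sampled along the path being tracked. This is an illustrative sketch of the stated rule, not the tracker's actual control flow:

```python
def track_until_fault(values, fault_threshold):
    """Walk along fault cube values sampled under a horizon being
    tracked and return the indices visited before the first sample
    whose value exceeds the Fault threshold. Values at or below the
    threshold are treated as background. Sketch of the described
    stopping rule."""
    visited = []
    for i, v in enumerate(values):
        if v > fault_threshold:  # fault detected: stop tracking here
            break
        visited.append(i)
    return visited
```

With a threshold of 0.5, samples [0.1, 0.2, 0.9, 0.1] are tracked only through the first two positions; tracking stops at the 0.9 fault sample rather than jumping across it.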
You can use the Outputs options to create confidence score cubes for each iteration to QC the result. The confidence score cube is the output of the model given the seismic as input. Based on the given labels, a model is trained to give high confidence at the picked reflector and lower confidence elsewhere. While tracking a seismic event, the confidence is calculated on the fly to evaluate whether tracking should continue (confidence above 0.5 at the reflector).
The Create final confidence cube option creates the final RBF and NN cubes for the last iteration. The Create all confidence cubes option creates RBF and NN cubes for each iteration. The None option does not create confidence score cubes.
Note: Created RBF and NN cubes are always virtual. You can realize them if needed; the realization process can take some time.
The RBF virtual cube represents the horizon confidence given the trained RBF model, while the NN virtual cube represents the horizon confidence given the trained NN model. The reason for having two different models is that RBF models require little training data to give a robust model and can also be modified on the fly while tracking to enforce trace-to-trace consistency.
An NN model requires more training data (for example, more traces) to produce a robust model, but it is more accurate given enough training data. While tracking, both models are used: the NN model to predict horizon confidence regionally, and the RBF model to predict trace-to-trace confidence locally. The RBF model is modified on the fly to generate a local model for trace-to-trace confidence prediction. The RBF virtual cube represents the non-local RBF model before modification to a local model, in other words, a regional RBF model. The NN model is used as a generic regional model for the whole seismic cube, while the RBF model is primarily used to enforce local consistency.
Note: The RBF model is not a neural network model, which is the reason that it is well suited for on-the-fly transformation from a regional to a local model.
18 Select Track to start the training and prediction.
Note: You can run horizon prediction for different seismic horizons at the same time. When Track is applied, use a different input to start another horizon prediction. The process is asynchronous, and Petrel is locked only while a horizon interpretation is updated in the Input pane.
The Undo option is available in the Horizon prediction dialog box and can be used for horizon prediction steps only. The Undo option becomes active after at least one iteration has run. Each undo click takes the output one step (one iteration) back. The Undo option stays available after Petrel is saved and reopened and reflects the different horizon prediction outputs.
Z or TWT attribute
The Z or TWT attribute describes the elevation of each point in a tracked
horizon, based on the domain of the input seismic data and in the project units.
Amplitude attribute
The Amplitude attribute describes the amplitude value of the input seismic data
in each point of a tracked horizon.
Track scores attribute
The Track Scores attribute describes the confidence score of each point in the
extracted horizon. This is the confidence value that is produced while tracking
by the prediction model at the position of the horizon. The same model is used
to calculate values for the confidence cube. Values might be different between
this attribute and the corresponding values extracted from the confidence
cube, because the values from the cube are calculated at sample positions and
subsequently interpolated at the horizon position.
*Mark of SLB.