Estimating Resources Using Basic Geostatistics
To review or update your support cases, reference our knowledge base, check for known issues
and/or submit ideas, please log into the Hexagon Community at:
community.hexagonmining.com
Note: If you do not yet have access to the Community, please request access by clicking on
"request login".
©2009-2023 by Leica Geosystems AG. All rights reserved. No part of this document shall be reproduced, stored in a retrieval system, or
transmitted by any means, electronic, photocopying, recording, or otherwise, without written permission from Leica Geosystems AG. All
terms mentioned in this document that are known to be trademarks or registered trademarks of their respective companies have been
appropriately identified. MinePlan® is a registered trademark of Leica Geosystems AG. This material is subject to the terms in the Hexagon
Mining Terms and Conditions (available at http://www.hexagonmining.com/).
Estimating Resources Using Basic Geostatistics
MinePlan: Exploration to Production
MinePlan software is a comprehensive mine planning platform offering integrated solutions for
exploration, modeling, design, scheduling and production. It uses raw data — from drillholes,
blastholes, underground samples and other sources — to derive 2D and 3D models essential to
mine design and planning. Below the ground or at the surface, from precious metals to base
metals, for coal, oil sands and industrial minerals, MinePlan software tackles geomodeling mining
applications to improve productivity at every stage of a mine’s life.
GEOMETRIES
Use digitized data to define geologic information in section or plan; define topography contours;
and define structural information, such as mine designs, important in the evaluation of an ore
body. Virtually every phase of a project, from drillholes to production scheduling, either uses or
derives geometric data. MinePlan software lets you create, manipulate, triangulate and view any
geometric data as 2D or 3D elements.
DRILLHOLES
Manage drillhole, blasthole and other sample data in a Microsoft SQL Server database. The data
can be validated, manipulated and reported; and it is fully integrated with other MinePlan products
for coding, spearing, compositing, interpolation, statistics and display. Some of the types of data
you can store are drillhole collar information (location, length and more), down-hole survey data
(orientation), assays, lithology, geology, geotechnical data and quality parameters for coal.
COMPOSITING
Calculate composites by several methods, including bench, fixed length, honoring geology and
economic factors. These composites are fully integrated with other MinePlan products for statistics
and geostatistics, interpolation and display.
©2023 Hexagon
Block models: Used to model base metal deposits such as porphyry copper, non-layered deposits,
and most complex coal and oil sands projects. Vertical dimensions are typically a function of the
mining bench height. Block models contain grade items, geological codes and a topography
percent, among other qualities and measurements.

Stratigraphic models: Used to model layered deposits, such as coal and oil sands. Although they
are normally oriented horizontally, they can be oriented vertically for steeply dipping ore bodies.
Vertical dimensions are a function of the seam (or other layered structure) and interburden
thicknesses. Stratigraphic models contain elevations and thicknesses of seams (or other layered
structures), as well as grade items, geological codes, a topography percent, and other qualities
and measurements.
MODELING
Build and manage 3D block, stratigraphic and surface models to define your deposit. Populate
your models through geometries (polygons, solids or surfaces) coded into the model; calculations
on model items; text files loaded into the model; and interpolation through techniques such as
inverse distance weighting, kriging or polygonal assignment. As you design and evaluate your mine
project, you can update your model, summarize resources and reserves, calculate and report
statistics, display in plots or view in 2D and 3D.
Contents
Geostatistics Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Common Geostatistical Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Initializing Sigma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Data Analysis & Graph Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Compositing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Drillhole Spacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Composite Capping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Declustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Calculating Variograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Downhole Variograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Variogram Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Variogram Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Model Interpolation Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Kriging Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Interpolation Debug . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Dynamic Unfolding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Visual Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Graphic Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Point Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Change of Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Model Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Conclusion & Future Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
ALTERATION ZONES
Phyllic 1
Potassic 2
Propylitic 3
LITHOLOGY CODES
Diorite 1
Granodiorite 2
Quartz Feldspar 3
Other* 4
*Intermediate Breccia, Late
Breccia, Country Rock
MINERALOGY CODES
Oxides 1
Primary Sulfides 2
Secondary Sulfides 3
Other (Outside) 4
PROJECT BOUNDARY COORDINATES (in metric units)
            Min    Max    Cell Size
Easting:    3500   8500   (DX=25)
Northing:   4500   9500   (DY=25)
Elevation:   705   1965   (DZ=15)
DRILLHOLE DATABASE
The drillhole database consists of 1034 drillholes collected over the course of two drilling
campaigns (one on the northwest side of the deposit and the other on the southeast). Drillhole
types include diamond, reverse circulation, hammer, and mixed hammer and diamond. Samples were
collected at various lengths, from 1-meter to 15-meter intervals. Element sample analysis included
total copper, acid soluble copper, molybdenum and zinc.
Block size has already been defined as 25x25x15 meters based on drillhole spacing. In this class,
you will only use drillholes inside the diorite, granodiorite and quartz feldspar lithologies to
create a model. Geostatistical analysis will be performed inside the three different mineralization
types. Lithology and mineralization solids have already been coded inside the drillholes and the
model.
Before working through the exercises in this workbook, make drillhole views of the assays and overlay
topography. Then open the grid sets and display the drillholes in 2D views against the lithology and
mineralogy solids. Ensure that the drillholes have been coded with lithology and mineralogy codes.
NOTES:
Geostatistics
Geostatistics is the application of statistical methods or the collection and analysis of statistical
data for use in the earth sciences.
Universe
The universe is the source of all possible data. For our purposes, an ore deposit can be defined as
the universe.
Population
"Population" is synonymous with "universe" in that it refers to the total category under
consideration. It is the set of all samples of a predetermined universe within which statistical
inference is to be performed. It is possible to have different populations within the same universe
based on the support of the samples: for example, the population of blasthole grades versus the
population of exploration hole samples. Therefore, the sampling unit and its support must be
specified in reference to any population.
Sample
A sample is a part of the population on which a measurement of the attribute of interest is made,
such as an assayed interval of drillhole core.
Random Variable
A random variable is a variable whose values are randomly generated according to a probabilistic
mechanism. It may be thought of as a function defined for a sampling process. For example, the
outcome of a coin toss or the future lab analysis result of the grade of a core sample in a diamond
drill hole can be considered as random variables.
Regionalized Variable
A regionalized variable is one outcome of all the random variables within the limits of the defined
universe. As the attributes considered in mining are generated by natural processes, the
regionalized variables have spatial structure. The grade of ore, the thickness of a coal seam and
the elevation of the surface of a formation are examples of regionalized variables. In geostatistics,
it is assumed the existing data is sampled from an unknown regionalized variable. The regionalized
variable is used to characterize the structured random aspect of the attribute studied, consisting
of irregular variations at locations where the attribute is not sampled. The main purpose of
regionalized variables is to express the structural properties of the attributes studied in a
framework in which they can be statistically characterized.
Support
The term "support" refers to the size, shape and orientation of a sample. The support of a sample
can be small or large with respect to the dimension of the deposit. A change in any characteristic
of the support defines a new regionalized variable. For example, the channel samples gathered
from anywhere in a drift will not have the same support as the channel samples cut across the ore
vein in the same drift.
Frequency Distribution
The frequency distribution of an attribute shows the degree of occurrence of the attribute within
a range of values. There are two common representations of a frequency distribution: the
probability density function (pdf) and the cumulative distribution function (cdf).
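As an illustration (the grade values below are hypothetical, not part of the workbook's data set), both representations can be sketched in a few lines of Python: the pdf is approximated by binned counts, and the cdf by the fraction of values at or below a threshold.

```python
from collections import Counter

# Hypothetical copper grades (% Cu) -- illustrative values only.
grades = [0.12, 0.35, 0.35, 0.47, 0.58, 0.63, 0.81, 0.92, 1.10, 1.45]

def empirical_cdf(data, x):
    """Cumulative distribution: fraction of values less than or equal to x."""
    return sum(1 for v in data if v <= x) / len(data)

def histogram(data, bin_width):
    """Approximate the pdf by counting values per bin, keyed by lower edge."""
    return Counter(int(v // bin_width) * bin_width for v in data)

hist = histogram(grades, 0.5)          # counts per 0.5 %Cu class
p_below = empirical_cdf(grades, 0.58)  # fraction of samples at or below 0.58
```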
Summary Statistics
The general procedure used for describing a set of numerical data is referred to as "reduction of
the data." This process involves summarizing the data by computing a small number of measures
that characterize the data and describe it adequately for the immediate purposes of the analyst.
We use several statistics to describe a distribution. These statistics fall into three categories: mea-
sures of location, measures of variation or spread, and measures of shape. All statistics below refer
to the sample statistics.
MEASURES OF LOCATION
You can locate these measures on the distribution of the data.
Sample Mean
The sample mean is an average of all available samples. The average can further be weighted to
account for different aspects of the nature of the samples, including preferential sampling,
variability in support, and sample recovery.
Median
The median is the midpoint of the observed values when they are arranged in increasing (or
decreasing) order. Therefore, half of the values of the distribution are below the median and half
of the values are above the median. The median can easily be read from a probability plot.
Mode
The mode is the value that occurs most frequently. The mode is easily located on a graph of
a frequency distribution. It is at the peak of the curve, the point of maximum frequency. On a
histogram, the class with the tallest bar can give a quick idea where the mode is.
MEASURES OF VARIATION
The other measurable characteristic of a set of data is the variation. Measures of variation or
spread are frequently essential to give meaning to averages. There are several such measures.
The most important ones are variance and standard deviation.
Sample Variance
The sample variance describes the variability of the data values. The variance is the average of
the squared difference between sampled values and the sample mean. As in the case of the
average, the sample variance can be weighted or non-weighted. Since the sample variance
involves the squared differences, it is sensitive to outlier values.
Standard Deviation
The standard deviation is the square root of the variance. It is often used instead of the variance
since its units are the same as the units of the attribute studied.
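These measures can all be computed with Python's standard statistics module. The composites below are hypothetical, purely to illustrate the definitions:

```python
import statistics

# Hypothetical copper composites (% Cu) -- illustrative values only.
cu = [0.21, 0.35, 0.35, 0.47, 0.52, 0.63, 0.70, 0.88, 1.10, 1.95]

mean = statistics.mean(cu)      # measure of location
median = statistics.median(cu)  # midpoint of the ordered values
mode = statistics.mode(cu)      # most frequent value
var = statistics.variance(cu)   # sample variance (n - 1 divisor)
std = statistics.stdev(cu)      # square root of the variance, same units as cu
```

Note that the single high value (1.95) pulls the mean above the median, a typical sign of a positively skewed grade distribution.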
MEASURES OF SHAPE
In addition to measures of central tendency and variation, there are other measurable
characteristics of a set of data that may describe the shape of the distribution.
Skewness
The skewness is often calculated to determine whether a distribution is symmetric. The direction
of skewness is defined as the direction of the longer tail of the distribution. If the distribution
tails to the left, it is called negatively skewed; if it tails to the right, it is called positively
skewed.
Peakedness
The peakedness or kurtosis is often calculated to determine the degree to which the distribution
curve tends to be pointed or peaked.
Coefficient of Variation
The coefficient of variation is a measure of relative variation. It is the standard deviation
divided by the mean and does not have a unit. Therefore, it can be used to compare the relative
dispersion of values around the mean among distributions described in different units. It may also
be used to compare distributions that, though measured in the same units, are of such different
absolute magnitudes that comparing their variability with measures of absolute variation is not
meaningful.
If estimation is the final goal of a study, the coefficient of variation can provide some warning of
upcoming problems. A coefficient of variation greater than 1 indicates the presence of some
erratic high sample values or the presence of trends in the mean and variance, which may have
significant impact on the final estimates.
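A quick sketch of the coefficient of variation, using hypothetical values in two different units; note how one erratic high molybdenum value pushes the CV above 1:

```python
import statistics

def coefficient_of_variation(data):
    """Standard deviation divided by the mean (unitless)."""
    return statistics.stdev(data) / statistics.mean(data)

# Two hypothetical domains recorded in different units:
cu_pct = [0.2, 0.4, 0.5, 0.6, 0.9]    # copper, percent
mo_ppm = [120, 250, 310, 400, 2600]   # molybdenum, ppm, one erratic high value

cv_cu = coefficient_of_variation(cu_pct)  # well behaved: CV below 1
cv_mo = coefficient_of_variation(mo_ppm)  # erratic value pushes CV above 1
```

Because the CV is unitless, the two domains can be compared directly even though one is in percent and the other in ppm.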
BIVARIATE STATISTICS
Sample Covariance
The variability between two sample variables is described by covariance.
Correlation Coefficient
Covariance is affected by the magnitude of the data values. Therefore, the correlation coefficient,
a normalized covariance that removes the effect of magnitude, is often used instead. The
correlation coefficient ranges from -1 to 1: values of 1 and -1 indicate high direct and inverse
correlation, respectively, while 0 indicates no correlation.
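Both quantities can be sketched directly from their definitions; the paired grades below are hypothetical:

```python
import math

def covariance(xs, ys):
    """Sample covariance between paired variables (n - 1 divisor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def correlation(xs, ys):
    """Correlation coefficient: covariance normalized into [-1, 1]."""
    return covariance(xs, ys) / math.sqrt(covariance(xs, xs) * covariance(ys, ys))

# Hypothetical paired total copper and acid soluble copper grades:
tcu = [0.2, 0.4, 0.5, 0.7, 0.9]
scu = [0.1, 0.2, 0.3, 0.4, 0.5]

r = correlation(tcu, scu)  # close to 1: strong direct correlation
```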
NOTES:
Initializing Sigma
Sigma offers a package of statistical and geostatistical tools. It is a stand-alone application, so
it can run outside of MinePlan 3D (MP3D), or it can be called from MP3D in the MinePlan Menu.

LEARNING OBJECTIVE: Connect to a data source for statistical and geostatistical analysis.
Sigma Projects
Creating a Sigma project creates a set of subfolders in the location you choose to start your
project. The main subfolder _sigmaresources will store all graphs and necessary supporting files
created in the use of the tool.

The first time you use the tool, you will be asked if you want to create the project. If you have
already created a project, you will not be prompted.
Setup a data source and quick graphs: Setup → New data source → Filters → Quick graphs
Data Sources
Before you can use Sigma to create graphs, you must first connect to a data source. Once the
connection is made, you can run any number of Sigma graphs without reconnecting to the data
source. Sigma can manage many data sources at one time for your use.
There are many types of data sources to choose from: three types of drillhole sources, four types
of block model connections, and a general CSV connection. Sigma supports Torque data and the older
MinePlan files 09, 11, and 12. It can attach to block models that are normal, sub-blocked, single
ore percent, or multiple ore percent.
Creating a data source is basically a three-step process: select your data source type, complete
the general data source setup, and set up quick graphs. The first two steps are critical and the
last is optional. To create your data source, press the "PLUS" button and choose a type. Give your
data source a name and then select the appropriate model, drillhole, or ASCII data you wish to
explore. To finalize, press "Set Connection."
General data source setup is where you can choose the items you wish to review and apply filters.
Start by choosing categorical and continuous variables of interest and setting up filters on the
second tab. The more limited your data source, the faster the creation of graphs will be. On the
overview tab, you will press “Process Data” to see the metadata for your chosen variables and
move on to the quick graphs step.
The final step is setting up the quick plots for the data. This can be done either with "simple"
one-variable groupings or "matrix" style, where domains incorporate two or more categorical
variables. Choose a few quick graphs from each style. You will be prompted for the global
continuous variables you wish to analyze for each category you choose. You may modify this for
each bin if you choose.
Connect to Data: Press "+" and choose a data source type → Give it a name → Choose variables →
Choose filters → Process Data

Setup Quick Graphs: Setup quick graphs "simple" → Select categorical variables → Select continuous
variables

Setup More Quick Graphs: Setup quick graphs "matrix" → Select categorical variables for rows and
columns → Select continuous variables
NOTES:
EXERCISE: Create Histograms
Generate three histograms for total copper by mineralogy code. The bin width will be 0.1 with 15 bins. After
creating the three histograms, use the Overlay function to add them to the same chart.
View Data Statistics: Overlay view → Statistics below graph → Statistics → Summary Statistics

View Cumulative Frequency: Histogram view → Right click and view Cumulative Frequency Curve
The Filter tab appears throughout Sigma. This tab allows you to set up multiple custom filters on data (e.g.
histogram of copper only where lithology = 2). To use a filter, open the Filter tab of the graph options,
click the + button and then choose your filter options.
Experiment with Filters: Use the Data Filters tab → "+" icon → Variable "Min Code", OP "=", Value
"1", "2", or "3". Click Update. Generate a different graph for each value.
Box Plots
A box plot (also known as a box and whisker diagram) is a convenient way of graphically depicting
groups of numerical data through their quartiles. It displays the distribution of data based on the
five-number summary: lowest non-outlier, first quartile, median, third quartile, and highest
non-outlier. Outliers may be plotted as individual points.
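The five-number summary can be sketched with Python's statistics.quantiles. The grades are hypothetical, and this simple version does not trim outliers the way a full box plot would:

```python
import statistics

def five_number_summary(data):
    """Minimum, first quartile, median, third quartile, maximum."""
    q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    return min(data), q1, q2, q3, max(data)

# Hypothetical, evenly spread grades:
grades = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
lo, q1, med, q3, hi = five_number_summary(grades)
```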
Scatter Plots
One of the most common and useful presentations of bivariate data sets is the scatter plot. A
scatter plot is an x-y graph of the data on which the x-coordinate corresponds to the value of one
variable, and the y-coordinate to the value of another variable.
A scatter plot is used to determine if two variables are related or if there are unusual data pairs.
In the early stages of studying a spatially continuous data set, it is necessary to check and clean
the data. Even after the data have been cleaned, a few erratic values may have a major impact on
estimation. The scatter plot can be used to help both in the validation of the initial data and in
the understanding of later results.
Quantile-Quantile Plots
Two distributions can be compared by plotting their quantiles against one another. The resultant
plot is called a quantile-quantile, or simply q-q, plot. If the q-q plot appears as a straight line,
the two marginal distributions have the same shape. A 45-degree line further indicates that their
means and variances are also the same.
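A q-q plot reduces to pairing matching quantiles of the two distributions. In this hypothetical sketch, the second data set is an affine transform of the first, so the paired quantiles fall on a straight (but not 45-degree) line:

```python
import statistics

def qq_pairs(xs, ys, n_points=9):
    """Pair matching quantiles of two distributions for a q-q plot."""
    qx = statistics.quantiles(xs, n=n_points + 1)
    qy = statistics.quantiles(ys, n=n_points + 1)
    return list(zip(qx, qy))

# ys has the same shape as xs but a different mean and variance,
# so the q-q points lie on the line y = 2x + 1 rather than y = x.
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [2 * v + 1 for v in xs]
pairs = qq_pairs(xs, ys)
```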
Build a QQ Plot: Statistics Panel → QQ Plot → Input Variable X "CUID", Y "MOI", apply matching
filters for each axis in the Data Filters tab using the MINER item. Create a new QQ plot for each
MINER number.
Contact Plots
A contact plot is a plot of mean grades as a function of signed distance from a contact of a given
type. Typically, a contact plot is used to determine whether the grade transitions smoothly across
the contact, or whether there is a discontinuity. Examples of discontinuity and transition zones,
respectively, are shown.
Build a Contact Plot: Geostatistics Panel → Contact Plot → Input Variable "Total Copper", Contact
variable Lith Code, Contact Values 1,2,3,4, step size 2.00, max distance 30 → Apply

Change Styles: Style tab → Enhancements → Add Bin Average points, Histogram Bars
Pivot Report
Pivot Reports are convenient for preparing tables for various reports. You can customize the format
in Sigma, and cut and paste the results into software like Microsoft Excel and Microsoft Word. The
following procedure demonstrates how to generate a basic univariate custom report. Univariate
describes an expression, equation, function or polynomial of only one variable. It is also termed
a one-way sensitivity analysis.
Create a Pivot Report: Statistics → Pivot Report → Input Variable Total Copper → Row or Column
Variable Min Code. Select all 4 min code values.

Filter High Values: Data Filters tab → Total Copper → Choose a value to filter out and click
Update. Investigate how the coefficient of variation changes.
Compositing
A composite is the weighted average of a set of samples that fall within a defined boundary. The
weighting factor is usually the sample length, but it may also include sample specific gravity or
other parameters.

LEARNING OBJECTIVE: Composite drillhole data.
Use composites, instead of samples, in the interpolation of the deposit model to provide a mining
basis for modeling, reduce the amount of data used and provide uniform support for geostatistics.
The composite length is a function of the variability of the data, which is a characteristic of the ge-
ology of the deposit. You can also add geologic codes through the drillhole view coding options
in MP3D, or by overlaying codes from other Torque coverages. Those composites are then ready
for interpolation directly in MinePlan.
MinePlan Torque offers a number of compositing interval methods: bench, seam, fixed length, honor
sample attribute, composite entire sample site, economic, and samples-to-composites approach.
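Whatever the interval method, the core of compositing is a length-weighted average. A minimal sketch with hypothetical assay intervals (an illustration of the weighting, not the Torque implementation):

```python
def length_weighted_composite(intervals):
    """Composite grade as a length-weighted average of (length, grade) pairs."""
    total_length = sum(length for length, _ in intervals)
    return sum(length * grade for length, grade in intervals) / total_length

# Hypothetical assay intervals making up one 15 m bench composite:
assays = [(5.0, 0.40), (7.0, 0.80), (3.0, 0.20)]
bench_grade = length_weighted_composite(assays)  # longer intervals weigh more
```

Because the 7 m interval carries the most length, the composite grade sits closer to 0.80 than a simple unweighted mean of the three assays would.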
View the composites in MinePlan 3D (MP3D). Add strips and compare to assay strips.
Run statistics in Sigma and compare to the assay statistics. Use Probability plots per mineralogy to compare.
Drillhole Spacing
Before building a model, it is important to analyze drillhole spacing to find an appropriate lag
distance for variogram calculation. Distances around the drillhole spacing can be used as lag
distances.

LEARNING OBJECTIVE: Find an appropriate lag distance for variogram calculations by analyzing
drillhole spacing.
Open Compass: Go to MinePlan → Compass → File → Open → stat.prj. Reset your Torque database in the
Setup tab.

Run procedure: Run procedure p52201.dat, "Analyze Drillhole Spacing". Choose Torque composite
input, sample site type Drillholes, Composite Set: Fixed. On area selection choose benches 26 to
63. Label Item = Total Copper, Search distance on easting and northing = 100, on elevation 7.5
(half bench height). Experiment with different Min Code filters in the Optional Data Selection for
DH Spacing Analysis page.
Set Up as Sigma Data Source: Use the Setup button in the Sigma Home tab. Use the "Create new data
source" button to select the CSV option and navigate to the dat522.csv. Select the appropriate
header row.
NOTES
Composite Capping
Sometimes it is necessary to restrict the influence of outliers when building a resource model. You
may want to consider capping composite values based on probability plots or cutoff analyses.

LEARNING OBJECTIVE: Cap drillhole values to reduce the impact of outliers.
Consider doing a sensitivity analysis on your cap value. Compare what happens to the mean grade if
you cap the data at different values. In Sigma, run box plots with different maximum values. Study
the mean values and other statistics after each run. Compare the cap value you chose in the
previous exercise with the sensitivity plot.
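The sensitivity of the mean grade to the cap value can be sketched with hypothetical composites containing one erratic high value:

```python
import statistics

def cap(values, cap_value):
    """Replace any value above the cap with the cap value itself."""
    return [min(v, cap_value) for v in values]

# Hypothetical composites with one erratic high value:
composites = [0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 4.8]

# Mean grade under progressively tighter caps:
means = {c: statistics.mean(cap(composites, c)) for c in (5.0, 2.0, 1.0)}
```

Tightening the cap pulls the mean down; a plot of mean grade versus cap value is exactly the sensitivity plot described above.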
Declustering
Preferential drilling is often experienced in higher-grade areas, which may result in clusters of
samples biased toward high grade values. If clustering occurs at these high-grade areas, you will
overestimate the mean; declustering gives the unbiased summary statistics of the population. This
is an important step in resource modeling, as the rest of the inference of the domain will be based
on this correction of sampling bias. The declustered variance defines the sill of the variogram
more appropriately than the raw variance, thus affecting how the range of a variogram is
calculated. The method implemented in Sigma is called Cell Declustering. It overlays cells (cubes)
on a domain and assigns weights to the samples inversely related to the number of samples in each
cell.

LEARNING OBJECTIVE: Decluster your domains using the Cell Decluster method in Sigma.
The cell sizes are set in the X direction; the Y and Z directions are handled using an anisotropy
ratio between those directions and X (by default the ratio is 1:1). The minimum cell size is
usually the closest sample spacing in your drilling, and the maximum is set to no more than half
the extent of the domain. The grid origin placement is by default generated at 25 different points;
Sigma calculates the average weight and displays it in the graph. For validation purposes, the
statistics from declustered composite values can be compared to the 3D block model statistics.
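The weighting idea behind cell declustering can be sketched as follows. The collar coordinates are hypothetical and the grid origin is fixed; Sigma's implementation additionally averages over multiple origin placements and a range of cell sizes:

```python
from collections import Counter

def cell_decluster_weights(points, cell_size, origin=(0.0, 0.0, 0.0)):
    """Weight each sample inversely to the number of samples in its cell."""
    def cell(p):
        return tuple(int((c - o) // cell_size) for c, o in zip(p, origin))
    counts = Counter(cell(p) for p in points)
    raw = [1.0 / counts[cell(p)] for p in points]
    scale = len(points) / sum(raw)  # normalize so the weights sum to n
    return [w * scale for w in raw]

# Hypothetical collars: three clustered holes plus one isolated hole.
pts = [(1.0, 1.0, 0.0), (2.0, 1.0, 0.0), (1.0, 2.0, 0.0), (40.0, 40.0, 0.0)]
weights = cell_decluster_weights(pts, cell_size=10.0)
```

The three clustered holes share one cell and each receive a reduced weight, while the isolated hole receives a larger one, so a weighted mean is no longer biased toward the cluster.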
Create Cell Declustering Chart: Geostatistics tab → Cell Declustering → Choose Copper Composites
data source → Grid Parameters: Min X cell size 0.5, max 5,000, number of cell sizes 100. In the
data filter, put Min Code != 4 to include only mineralized rock.
Export Cell Size: Use the Export Cell Size option to create a weighted cell size file you can use
in other Sigma charts. Choose a cell size in the dropdown menu, click export, and save the file.
Create two histograms of Cu values, one of them using the "Decluster" option with the decluster
file just exported. Compare the differences.
Calculating Variograms
Several geostatistical tools, such as correlations, covariance and variograms, describe the spatial
continuity in an ore deposit. All of these tools use summary statistics to describe how spatial
continuity changes as a function of distance and direction.

LEARNING OBJECTIVE: Build and view variogram maps to analyze the spatial continuity in an ore
deposit for ore reserve estimation.
The variables in earth sciences show some similarity (or dissimilarity) between the value of a
sample at one point and the value of another sample some distance away. This expected variation can
be called the spatial similarity or spatial correlation. Variograms provide a means to measure the
similarity or correlation of sample values within a deposit, or rather within a homogeneous area of
the deposit in which it is assumed the geological relationships are the same or similar.
In simplest terms, a variogram measures the spatial correlation between samples. One possible
way to measure this correlation between two samples at point xi and xi + h taken h distance apart
is the function:
f1(h) = (1/n) Σ [z(xi) − z(xi + h)]
In this function, z(xi) refers to the assay value of the sample at point xi, and h is the distance
between samples. Thus, the function measures the average difference between samples h distance
apart. Although this function is useful, in many cases it may be equal or close to zero because the
differences cancel out. A more useful function is obtained by squaring the differences:
2γ(h) = (1/n) Σ [z(xi) − z(xi + h)]²

In this function, the differences do not cancel each other out, and the result will always be
positive. This was the variogram function, originally denoted as 2γ(h). However, popular usage
refers to the semi-variogram γ(h) as being the variogram. Therefore, throughout this chapter,
variogram will refer to the following function:

γ(h) = (1/2n) Σ [z(xi) − z(xi + h)]²
Note that γ(h) is a vector function in three-dimensional space, and it varies with both distance
and direction. The number of samples, n, is dependent on the distance and direction selected to
accept the data.
Variograms will eventually help determine search parameters to apply in the model interpolation.
In the case of kriging, they will dictate the weighting of the composites falling inside the search for
each location of estimation.
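The experimental variogram γ(h) can be sketched for composites along a single line. The data are hypothetical, and this is a bare illustration of the formula, not Sigma's variogram engine:

```python
def experimental_variogram(samples, lag, tol=0.5):
    """Average half squared difference of pairs separated by about `lag`.

    samples: list of (location, value) along one line; pairs whose spacing
    falls within lag +/- tol contribute to gamma(h).
    """
    sq_diffs = [
        (vi - vj) ** 2
        for i, (xi, vi) in enumerate(samples)
        for xj, vj in samples[i + 1:]
        if abs(abs(xj - xi) - lag) <= tol
    ]
    return sum(sq_diffs) / (2 * len(sq_diffs))  # gamma(h) = (1/2n) * sum

# Hypothetical composites every 5 m down one drillhole:
hole = [(0, 0.2), (5, 0.3), (10, 0.3), (15, 0.5), (20, 0.4)]
gamma_5 = experimental_variogram(hole, lag=5.0)
```

In three dimensions, h becomes a vector and the pair-acceptance test also checks direction, which is why lag distance and tolerance choices (see the drillhole spacing exercise) matter so much.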
Variograms in Sigma
Sigma offers six types of variograms:
Traditional:
This is the (semi-)variogram γ(h) defined above: half the average squared difference between pairs
of samples separated by distance h.

Traditional Standardized:
This is the same as a traditional variogram, but the results are all divided by the maximum
variability of the data. This enforces a total sill of 1 on the data.
Madogram:
This is just like the traditional variogram except the absolute difference of the function is used
instead of the squared value. This can help correct for high outlier influence.
Correlogram:
By definition, the correlation function ρ(h) is the covariance function standardized by the
appropriate standard deviations (so in a sense it is like a normalized covariance):

ρ(h) = C(h) / (σ−h σ+h)

where σ−h is the standard deviation of all the data values whose locations are −h away from some
other data location:

σ²−h = (1/N) Σ v²i − m²−h

and σ+h is the standard deviation of all the data values whose locations are +h away from some
other data location:

σ²+h = (1/N) Σ v²j − m²+h

The shape of the correlation function is similar to the covariance function. Therefore, it needs
to be inverted to give a variogram type of curve, which we call the correlogram. Since the
correlation function is equal to 1 when h = 0, the value obtained at each lag for the correlation
function is subtracted from 1 to give the correlogram:

γ(h) = 1 − ρ(h)
Pairwise Relative and Local Relative:
These types of variograms are used to account for varying means. Pairwise relative variograms and
local relative variograms scale the original variogram to some local mean value. This serves to
reduce the influence of very large values.
A relative variogram is obtained from the ordinary variogram by simply dividing each point on the
variogram by the square of the mean of all the data used to calculate the variogram value at
that lag distance. Pairwise relative variogram also adjusts the variogram calculation by a squared
mean. This adjustment, however, is done separately for each pair of sample values, using the
average of the two values as the local mean.
γPR(h) = 1/(2N(h)) Σ [ (vi − vj)² / ((vi + vj)/2)² ]
where vi and vj are the values of a pair of samples at locations i and j, respectively.
The reason behind the computation of a relative variogram is an implicit assumption that the assay
values display proportional effect. In this situation, the relative variogram tends to be stationary.
If the relationship between the local mean and the standard deviation is something other than
linear, one should consider scaling the variograms by some function other than the mean.
Indicator:
This variogram is calculated from data that has been coded (transformed into zeros and ones)
using a series of indicator cutoffs or thresholds. They can be used to estimate the proportion of
different populations in a particular area.
At each point x in the deposit, consider the following indicator function of zc defined as:
i(x; zc) = 1 if z(x) ≤ zc; otherwise i(x; zc) = 0
where: x is location, zc is a specified cutoff value, z(x) is the value at location x.
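The indicator coding itself is a one-line transform; a minimal sketch (the data and cutoffs here are invented for illustration):

```python
def indicator_transform(values, cutoffs):
    """Code each value to 0/1 at a series of cutoffs:
    i(x; zc) = 1 if z(x) <= zc, else 0.
    Returns {cutoff: [0/1, ...]} (illustrative sketch)."""
    return {zc: [1 if z <= zc else 0 for z in values] for zc in cutoffs}

ind = indicator_transform([0.12, 0.45, 0.30, 0.80], [0.2, 0.5])
```

The mean of each indicator list estimates the proportion of the population at or below that cutoff, which is what makes indicator variograms useful for estimating the proportions of different populations in an area.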
Indicator Standardized:
This is the same as an indicator variogram set but the total sill is normalized to 1.
Global Variograms
Global variograms will give you an idea of the average of the various directional variograms, as well as an indication of spatial structure. The omnidirectional variogram is the “best” variogram you will see from your data, but it is not sufficient to characterize the directional continuity that is true to your deposit.
It is not a strict average since the sample locations may cause certain directions to be over rep-
resented. For example, if there are more east-west pairs than north-south pairs, then the omni-
directional variogram will be influenced more by east-west pairs.
The calculation of the omni-directional variogram does not imply a belief that the spatial continu-
ity is the same in all directions. It merely serves as a useful starting point for establishing some of
the parameters required for sample variogram calculation. Since direction does not play a role
in omni-directional variogram calculations, one can concentrate on finding the distance param-
eters that produce the clearest structure. An appropriate class size or lag can usually be chosen
after a few trials.
Another reason for beginning with omni-directional calculations is that they can serve as an early
warning for erratic directional variograms. Since the omni-directional variogram contains more
sample pairs than any directional variogram, it is more likely to show a clearly interpretable struc-
ture. If the omni-directional variogram does not produce a clear structure, it is very unlikely that the directional variograms will show a clear structure.
Nugget Effect:
The nugget effect is a combination of:
• Short-scale variability that occurs at a scale smaller than the closest sample spacing
• Sampling error due to the way that samples are collected, prepared and analyzed
The ratio of the nugget effect to the sill is often referred to as the relative nugget effect and is usually quoted in percentages.
Downhole Variograms
LEARNING OBJECTIVE: Calculate downhole variograms and estimate nugget effect.
Downhole variograms are useful for studying variability at a very small distance. This information can be used for the directional variogram modeling process. It can help you determine the nugget value that may not be apparent when calculating the directional variograms.
Create Downhole Variogram: Geostatistics tab → Downhole Variogram → Data Source: Composites → Input Variable: Total Copper → Variogram type: Traditional Standardized. In Lag Parameters, use Type: Lag Distance, Lag Distance: 15, Number of Lags: 20, and Lag Tolerance: 7.00. Create one for all Total Copper values, then create one for each Mineralization type using the Data Filters tab.
Model Downhole Variogram: Geostatistics tab → Variogram Fit → Add one of the Downhole Variograms. Click Autofit. Investigate the calculated sill and nugget effect.
Variogram Maps
LEARNING OBJECTIVE: Learn to create variogram maps for zones of interest.
The deposition of metals in ore bodies typically follows three-dimensional structures. In mining, variogram maps are used to study these structures based on features captured along two-dimensional planes in the variogram space. To have a better picture of the three-dimensional structure, variogram maps are calculated along orthogonal planes. These variogram maps are rotated in different configurations, with respect to the system origin, to find the main directions of continuity.
Along with neutral models, variogram maps are calculated to explore general characteristics of
domains, including the presence of mean trends, the presence of sub-structures, and general spa-
tial continuity. These characteristics define whether the domain has a stationary behavior or not.
In case the domain presents non-stationary features, it is recommended to pre-process the data
source and re-calculate the variogram maps. For example, in the case of strong presence of a
mean trend, it is recommended to remove it from the data source and re-calculate the variogram
maps based on the residual values. Then, the domain needs to be estimated or simulated with the residual values, and the trend added back afterward.
There are different metrics, other than the variogram, that are used to quantify the spatial continuity of the metal content attribute, including the pairwise variogram, correlogram, and madogram. However, these other metrics tend to overestimate the spatial continuity, and their use does not permit a thorough validation of the data source. These alternative metrics are typically used in preliminary studies when it is difficult to estimate the spatial continuity of the domain using the variogram. In Sigma, for practicality, even when the metric used is not the variogram, the maps are still referred to as variogram maps.
Variogram maps are simple to create and use. You will first create a variogram map based on an appropriate filter set. You will then use automatic fitting to rotate the variogram map to a reasonably correct direction. You can then adjust the direction to get a better fit. Finally, you will export the variogram map to three principal variograms for the major, minor, and vertical directions. These will be fitted with a variogram model and exported for use in interpolation.
Create Variogram Map: Geostatistics tab → Variogram Map. Use Torque Composites as the Data source → Input Variable: Total Copper. Use the Data Filters tab to limit to the separate mineralogy bodies. Once generated, click the “compute projection angles” link to calculate the directions of least variance.
Run the Variogram Fitting Tool: Once the map is created, select the “Variogram Fitting Tool” button in the Parameters tab of the variogram map. Name it after the filtered mineralogy of the Variogram Map.
Fit the Variograms: Click the “Update” tool to make the model active. You can then edit the model by clicking on the red and black icons on the best-fit line. Note that the sill in all directions must be the same; however, the range, or how quickly the line reaches the sill, is adjustable for each direction. Use the “prorate” button next to the Struct columns to automatically make the contributions add up to the sill.
Variogram Models
LEARNING OBJECTIVE: Model variograms in Sigma and view model variograms in Sigma and MinePlan 3D.
Practical usage of the experimental variogram requires a description of the variogram by a mathematical function, or model. Many models can describe experimental variograms; however, some models are more commonly used than others.
Spherical Models:
This is the most commonly used model to describe a variogram. The definition is given by:
γ(h) = c0 + c[1.5(h/a) − 0.5(h/a)³] if h < a
γ(h) = c0 + c if h ≥ a
In this equation, c0 refers to the nugget effect, a refers to the range of the variogram, h is the distance, and c0 + c is the sill of the variogram. The spherical model has a linear behavior at small separation distances near the origin, but flattens out at larger distances and reaches the sill at a, the range. It should be noted that the tangent at the origin reaches the sill at about two-thirds of the range.
Linear Models:
This is the simplest of the models. The equation of this model is as follows:
γ(h) = c0 + A·h
In this equation, c0 is the nugget effect and A is the slope of the variogram.
Exponential Models:
This model is defined by a parameter a (effective range 3a). The equation of the exponential
model is:
γ(h) = c0 + c[1 − exp(−h/a)], h > 0
This model reaches the sill asymptotically. Like the spherical model, the exponential model is linear
at very short distances near the origin. However, it rises more steeply and then flattens out more
gradually. It should be noted that the tangent at the origin reaches the sill at h = a, about one-third of the effective range.
Gaussian Models
This model is defined by a parameter a (effective range a√3). The equation of the Gaussian model is given by:
γ(h) = c0 + c[1 − exp(−h²/a²)], h > 0
Like the exponential model, this model reaches the sill asymptotically. The distinguishing feature of
the Gaussian model is its parabolic behavior near the origin.
Nested Structures
A variogram function can often be modeled by combining several variogram functions:
γ(h) = γ1(h) + γ2(h) + · · · + γn(h)
For example, there might be two structures displayed by a variogram. The first structure may de-
scribe the correlation on a short scale. The second structure may describe the correlation on a
much larger scale. These two structures can be defined using a nested variogram model. In us-
ing nested models, one is not limited to combining models of the same shape. Often the sample
variogram will require a combination of different basic models. For example, one may combine
spherical and exponential models to handle a slow rising sample variogram that reaches the sill
asymptotically.
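The model functions and their nesting are straightforward to express in code. A minimal sketch, where the nugget, sill contributions and ranges are invented for illustration:

```python
import math

def spherical(h, c0, c, a):
    """Spherical model: linear near the origin, reaches the sill c0 + c at range a."""
    if h <= 0:
        return 0.0  # by convention gamma(0) = 0
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)

def exponential(h, c0, c, a):
    """Exponential model: approaches the sill asymptotically (effective range ~3a)."""
    return 0.0 if h <= 0 else c0 + c * (1.0 - math.exp(-h / a))

def nested(h):
    """Nested structure: short-scale spherical plus longer-scale exponential.
    Parameters are illustrative, not from the course data."""
    return spherical(h, 0.1, 0.4, 50.0) + exponential(h, 0.0, 0.5, 150.0)
```

The nested model's total sill is simply the sum of the component sills (here 0.5 + 0.5 = 1.0), while each structure contributes its own behavior at its own scale.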
[Figures: sectional view and 3D view of a modeled variogram]
Interpolation Techniques
LEARNING OBJECTIVE: Interpolate a 3D block model using inverse distance weighting and kriging.
Interpolating the model is the only way to transfer the composite grades or qualities into the 3D block model (3DBM). Different types of interpolation routines are available in MinePlan. This course will cover inverse distance weighting (IDW) and kriging.
Ordinary kriging is designed primarily for the local estimation of block grades as a linear combina-
tion of the available data in or near the block, such that the estimate is unbiased and has minimum
variance. Ordinary kriging is linear because its estimates are weighted linear combinations of the
available data; it is unbiased because it aims for a mean error of zero; and it is “best” because it aims to minimize the variance of the errors.
The conventional estimation methods, such as the inverse distance weighting, are also linear and
theoretically unbiased. The distinguishing feature of ordinary kriging from conventional linear esti-
mation methods is its aim of minimizing the error variance.
INVERSE DISTANCE WEIGHTING
Inverse distance weighting (IDW) is a common estimation method. Each sample weight is inversely
proportional to the distance between the sample and the point being estimated. The equation is
as follows:
z∗ = [∑(1/di^p) z(xi)] / ∑(1/di^p), i = 1, . . . , n
In this equation z∗ is the estimate of the grade of a block or a point, z(xi) refers to the sample grade, di is the distance from sample i to the point being estimated, p is an arbitrary exponent, and n is the number of samples.
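The IDW formula reduces to a few lines of code. A minimal sketch (not MinePlan's implementation; p = 2 is a common choice, and the sample distances and grades below are invented):

```python
def idw_estimate(samples, p=2.0):
    """Inverse distance weighting. samples is a list of (distance, grade)
    pairs relative to the point being estimated; distances must be > 0
    (a coincident sample would normally just return its own grade)."""
    num = sum(grade / d ** p for d, grade in samples)
    den = sum(1.0 / d ** p for d, _ in samples)
    return num / den

z = idw_estimate([(10.0, 0.8), (20.0, 0.4), (40.0, 1.2)])
```

Note how the nearest sample dominates: with p = 2, the sample at 10 m carries 16 times the weight of the one at 40 m.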
KRIGING ESTIMATOR
The kriging estimator is a linear estimator of the following form:
z∗ = ∑ λi z(xi ) i = 1, . . . , n
In this equation, z∗ is the estimate of the grade of a block or a point, z(xi ) refers to sample grade,
λi is the corresponding weight assigned to z(xi ), and n is the number of samples. The weighting
process of kriging is equivalent to solving a constrained optimization problem where the objective
function is to minimize the error variance σ² = F(λ1, λ2, λ3, ..., λn) subject to Σλi = 1 in the case of ordinary
kriging.
This constraint optimization problem can be readily solved by using Lagrange multipliers.
KRIGING SYSTEM
Ordinary kriging can be performed for estimation of a point or a block. The linear system of equa-
tions for both cases is similar.
POINT KRIGING
The point kriging system of equations in matrix form can be written in the following form:
C ∗ λ = D
| C11 · · · C1n  1 |   | λ1 |   | C10 |
|  .         .   . |   |  . |   |  .  |
| Cn1 · · · Cnn  1 | ∗ | λn | = | Cn0 |
|  1  · · ·  1   0 |   |  µ |   |  1  |
The matrix C consists of the covariance values Ci j between the random variables Vi and V j at
the sample locations. The vector D consists of the covariance values Ci0 between the random
variables Vi at the sample locations and the random variable V0 at the location where an estimate
is needed. The vector λ consists of the kriging weights and the Lagrange multiplier. It should be
noted that the random variables Vi , V j , and V0 are the models of the phenomenon under study.
BLOCK KRIGING
The difference between block kriging and point kriging is that the estimated point is replaced by a block. The point-to-block correlation is the average correlation between a sampled point, i, and all points within the block. In practice, a regular grid of points within the block is used. Consequently, the matrix equation includes point-to-block correlations.
The block kriging system is similar to the point kriging system given above. In point kriging, the covariance vector D consists of point-to-point covariances. In block kriging, it consists of block-to-point covariances.
The covariance value CiA is no longer a point-to-point covariance like Ci0, but the average covariance between a particular sample and all of the points within block A.
Kriging Example
| C11  C12  1 |   | W1 |   | C1B |
| C21  C22  1 | ∗ | W2 | = | C2B |
|  1    1   0 |   |  µ |   |  1  |
Variogram model: Spherical, C0 = 0.2, C1 = 0.8, RY = 500 m, RX = 150 m, RZ = 150 m, R1/R2/R3 = 90/0/0.
| 1.0  0.1  1 |   | W1 |   | 0.56 |
| 0.1  1.0  1 | ∗ | W2 | = | 0.12 |
|  1    1   0 |   |  µ |   |  1   |
Grades   Weights
1.1      0.744 (w1)
0.5      0.256 (w2)
Relative location is important. Grade 1.1 gets most of the weight because of RY = 500 m at 90° rotation (the Y axis is rotated 90° to the east).
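The two-sample system above can be solved directly. A minimal sketch using pure-Python Gaussian elimination (illustrative only, not MinePlan's solver):

```python
def solve_linear(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting (illustrative only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Ordinary kriging system from the example: two samples plus the
# unbiasedness constraint (last row); unknowns are w1, w2 and mu.
C = [[1.0, 0.1, 1.0],
     [0.1, 1.0, 1.0],
     [1.0, 1.0, 0.0]]
D = [0.56, 0.12, 1.0]
w1, w2, mu = solve_linear(C, D)
estimate = w1 * 1.1 + w2 * 0.5  # weighted grade estimate
```

The recovered weights match the example (w1 ≈ 0.744, w2 ≈ 0.256), and the resulting grade estimate is about 0.947.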
Covariances Cij between samples and covariances CiB between samples and blocks are calculated from the variogram function. The covariance function and the variogram function are related by the following formula:
C(h) = C(0) − γ(h), where C(0) is the sill (the variance of the data).
Data Setup
When creating a new model interpolation using MIT, you will first be prompted with the data setup.
In this panel you can connect to your model as well as your data input. Data input options include
Torque composites, 3D points, downhole points, or MinePlan composites (file 9). MIT can be used to set up a simple IDW run, kriging, and more advanced options. The interpolation method can also be defined in the data setup panel.
Primary Search
The primary search panel allows the user to define a search radius around each block. This can
be a spherical search or an ellipsoidal search allowing the user to adjust the search distance de-
pendent on orientation. Here, the user can also apply dynamic unfolding or a relative coordinate
transform.
Choosing the ellipsoidal search option will further reduce the influence of composites along the minor axis. This is less important in a kriging interpolation, as the variograms already handle sample weighting. In the example below, COMP2 is rejected because it falls outside the ellipsoidal search. Using an ellipsoidal search will also make a difference after composites are selected. In both the IDW and kriging cases, it will sort composites based on their anisotropic distances, then pick the closest ones if the number of selected composites exceeds the maximum number of composites to use. In the IDW case, the distances used in the IDW formula will also be the adjusted ones. In the case of kriging, the weighting is handled by the variogram (which by default will be anisotropic if different ranges are used for different directions). All stored distances are also anisotropic.
Selection Rules
The selection rules panel allows the user to define the minimum and maximum number of com-
posites required per block as well as the maximum number of composites per hole. The maximum
distance to the closest composite can also be constrained (0 is default and will not apply this rule).
Left: the closest composite is within PAR7, so the block can be interpolated. Right: the closest composite is outside PAR7, so the block cannot be interpolated.
Octant Search
In the octant panel, you may further split your search volume into segments. The figures labeled 1, 2, 3 and 4 show the various splits:
1: Octant   2: Quadrant   3: Split Octant   4: Split Quadrant
Composites - Filters
Composites used in the interpolation can be filtered out based on attribute values in the composite.
Calculations – Variography
If you have a variogram file (generated in SIGMA or MSDA) you can define it here.
If you are running this procedure without a variogram parameter file, you can choose to build the variogram, which prompts you to define the variogram parameters you previously calculated outside of MinePlan. Check the “use variogram file” option if you previously exported a .var file from Sigma.
Advanced - Outliers
Outliers are used to restrict grades above or below a cutoff. The restriction is applied to the primary item used for interpolation (the default of 0 for the high and low cutoffs means no outlier logic is applied).
Repeat interpolation for the rest of the mineralizations. Use relative elevation items if needed. The procedure
relev.dat calculates relative items from a surface and stores them back to the model and drillholes.
Kriging Variance
For each block or point kriged, a kriging variance is calculated. The block kriging variance is given by:
σ²kb = CAA − Σ λi CiA − µ
where
CAA is the variance of the domain at the scale of the estimate. In practice, this average block-to-block variance is also approximated by discretizing the area A into several points. It is important to use the same discretization for the calculation of the point-to-block covariances in D in the kriging equations. If one uses different discretizations for the two calculations, there is a risk of getting negative error variances.
CiA is the covariance between sample i and the area (block) A of estimation.
λi is the kriging weight for each sample.
µ is the Lagrange multiplier from the kriging equations.
For the point kriging variance, the CAA variance is replaced by the variance of the point samples, or simply by the sill value of the variogram, and CiA is replaced by Ci0 (the point-to-point covariance).
Kriging variance does not depend directly on the data. It depends on the data configuration.
Since it is data value independent, the kriging variance only represents the average reliability of
the data configuration throughout the deposit. It does not provide the confidence interval for
the mean unless one makes an assumption that the estimation errors are normally distributed with
mean zero.
However, if the data distribution is highly skewed, the errors are definitely not normal because one
makes larger errors in estimating a higher-grade block than a low-grade block. Therefore, the relia-
bility should be data value dependent, rather than data value independent. For a fixed sampling
size, different sampling patterns can produce significantly different estimation variances. In two
dimensions, regular patterns are usually at the top of the efficiency scale in terms of achieving a
given estimation variance with the minimum number of data, while clustered sampling is the most
inefficient.
An equivalent form of the kriging variance is:
σ²k = σ²z + Σi Σj λi λj Ci,j − 2 Σi λi Ci,o
Where:
σ²z is the sample variance.
Ci,j is the covariance between samples.
Ci,o is the covariance between the samples and the location of the estimate.
This is essentially the same formula as the initial formula. In the case of block kriging, σ²z is the block variance. This formula simply says that the kriging variance (error of estimation) is: the data variance + the covariance between samples − the covariance between the samples and the block.
Therefore:
• As the variance of the data increases, the error will also increase – 1st component (positive)
• As the covariance between data increases, the error will also increase. High covariance between data means that the data may be clustered, and clustered data produce a higher error – 2nd component (positive)
• As the data get closer to the block, the covariance between a composite and the block increases, and the error decreases – 3rd component (negative)
Kriging variance should not be used by itself to assess confidence or classify reserves.
Interpolation Debug
LEARNING OBJECTIVE: Debug an interpolation run.
Debugging the interpolation run is imperative to fully understand how changes in the search parameters affect the interpolation results. By debugging a run, the user will be able to make a list of composites used for interpolating a block and, more importantly, make a visual representation of the search parameters.
Change some of the interpolation parameters and check the debug report each time. Evaluate the effect of
the change. Experiment with techniques such as reducing the maximum number of samples or the number
of composites per drillhole.
Dynamic Unfolding
Complex geology, such as overturned folds, vertical intrusions, and non-parallel top/bottom surfaces, can be accounted for during interpolation in MinePlan by applying Dynamic Unfolding. Dynamic Unfolding uses the surfaces generated by the Relative Surface Interpolator (RSI) to calculate distance and direction along those surfaces, then uses the results in interpolation.
The GeoTools MinePlan menu contains the Dynamic unfolding tool. Within the tool, RSI collection
creates surface geometries from a single surface or between two limiting surfaces (or with a sliced
solid). Fast Marching calculates the unfolded distance and direction between every composite
and every point on a grid covering the surfaces. Grid spacing controls the accuracy of the result:
The finer the grid spacing, the more accurate the unfolding results, but the longer it takes to carry
out the unfolding computation.
Dynamic unfolding is important for interpolation, variography, and coding when complex geol-
ogy is present. It improves representation of grade trends, incorporation of geology, and overall
understanding of the deposit.
EXERCISE: Validate the Model with a Histogram and Grade Tonnage Curves
Make histograms and g/t curves in Sigma for inverse distance, kriging and polygonal total copper grades
and compare them. How close did the estimation methods come to the polygonal? The polygonal method
represents the declustered composite distribution. Theoretically, further adjustment may be needed in the
polygonal distribution to accommodate the change of support (volume variance correction).
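A grade-tonnage tabulation can be sketched as follows. This is a simple stand-in for Sigma's g/t curve logic, and the block data in the test is invented:

```python
def grade_tonnage(blocks, cutoffs):
    """Grade-tonnage points from (tonnage, grade) block pairs: for each
    cutoff, the total tonnage and tonnage-weighted mean grade of all
    blocks at or above the cutoff (illustrative sketch)."""
    rows = []
    for zc in cutoffs:
        sel = [(t, g) for t, g in blocks if g >= zc]
        tons = sum(t for t, _ in sel)
        mean = sum(t * g for t, g in sel) / tons if tons else 0.0
        rows.append((zc, tons, mean))
    return rows
```

As the cutoff rises, tonnage falls and the mean grade of the remaining material rises, which is exactly the trade-off the grade-tonnage curves in the table below display.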
Create Gridsets: Create gridsets in the 3 principal directions based on the PCF. Delete any gridsets outside of the mineralized zone.
Create Swath Plot: Sigma → Statistics tab → Swath Plot → Select the stat15.dat block model and CUID, CUKRG and CUPLY as inputs → Define swaths → Select Grid set → Update.
Zone                Cutoff    Tonnage            Mean
Oxide               >=0.000   302,995,500.0      0.255
                    >=0.100   259,518,225.0      0.289
                    >=0.200   190,090,125.0      0.340
                    >=0.300   100,259,300.0      0.426
                    >=0.500   22,404,720.0       0.618
                    >=0.800   1,532,794.0        0.982
                    >=1.000   447,300.0          1.250
Primary Sulfides    >=0.000   2,614,684,000.0    0.184
                    >=0.100   2,382,713,000.0    0.195
                    >=0.200   951,586,100.0      0.267
                    >=0.300   233,433,375.0      0.362
                    >=0.500   14,197,540.0       0.582
                    >=0.800   603,787.5          0.931
                    >=1.000   135,000.0          1.147
Secondary Sulfides  >=0.000   722,918,700.0      0.289
                    >=0.100   703,430,400.0      0.295
                    >=0.200   531,694,200.0      0.341
                    >=0.300   278,990,200.0      0.427
                    >=0.500   57,344,060.0       0.653
                    >=0.800   8,917,500.0        0.970
                    >=1.000   3,032,813.0        1.159
Total               >=0.000   3,640,599,000.0    0.211
                    >=0.100   3,345,661,000.0    0.223
                    >=0.200   1,673,370,000.0    0.299
                    >=0.300   612,682,800.0      0.402
                    >=0.500   93,946,320.0       0.634
                    >=0.800   11,054,080.0       0.970
                    >=1.000   3,615,113.0        1.170
Point Validation
LEARNING OBJECTIVE: Use the point validation technique to compare interpolation scenarios.
The point validation technique predicts a known data point using an interpolation plan; surrounding data points are used to estimate the value at that location. This technique checks how well the estimation procedure can be expected to perform. It may suggest improvements, but it mainly compares interpolation scenarios and does not determine parameters. It reveals weaknesses and shortcomings.
Remember that all conclusions are based on observations of errors at locations where you do not actually need estimates. Because you remove values that would otherwise be used in the estimation, the results are generally pessimistic.
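The leave-one-out idea can be sketched in a few lines, here using IDW as the re-estimation method (the actual run would use your chosen interpolation plan; coordinates and grades are invented):

```python
def point_validation(points, p=2.0):
    """Leave-one-out validation: re-estimate each known sample from the
    others with IDW and collect the errors.
    points = [(x, y, grade), ...] (illustrative sketch)."""
    errors = []
    for i, (x, y, g) in enumerate(points):
        num = den = 0.0
        for j, (xj, yj, gj) in enumerate(points):
            if j == i:
                continue  # withhold the point being validated
            d = ((x - xj) ** 2 + (y - yj) ** 2) ** 0.5
            w = 1.0 / d ** p
            num += w * gj
            den += w
        errors.append(num / den - g)  # estimate minus true value
    return errors
```

Summary statistics of the error list (mean error, error variance) are what you would compare between candidate interpolation scenarios.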
Change of Support
LEARNING OBJECTIVE: Validate your model using change of support techniques.
The term “support” at the sampling stage refers to the characteristics of the sampling unit, such as the size, shape and orientation. For example, channel samples and diamond drill core samples have different supports. At the modeling and mine planning stage, support refers to the volume of the blocks used for estimation and production. It is important to account for the effect of the support in our estimation procedures, since increasing the support has the effect of reducing the spread of
data values. As the support increases, the distribution of data gradually becomes more symmetri-
cal. The only parameter that is not affected by the support of the data is the mean. The mean of
the data should stay the same even if we change the support.
There are some methods available for adjusting an estimated distribution to account for the support effect. The most popular are the affine correction and the indirect lognormal correction. Both methods have two features in common:
1. They leave the mean of the distribution unchanged.
2. They change the variance of the distribution by some “adjustment” factor.
Therefore, a further adjustment to the polygonal estimation can be performed before we overlay
it to the estimation distribution in the form of a Grade-Tonnage Curve. The polygonal distribution
represents the declustered composite distribution, but a further adjustment is needed to accom-
modate the change of support. (We are using points-composites to estimate a block.) Therefore,
a variance adjustment factor f can be used.
The formula used depends on whether we want to increase or decrease the variance.
Affine correction
The affine correction is a very simple correction method. Basically, it changes the variance of the
distribution without changing its mean by simply squeezing values together or by stretching them
around the mean. The underlying assumption for this method is that the shape of the distribution
does not change with increasing or decreasing support.
The affine correction transforms the z value of one distribution to z’ of another distribution using the
following linear formula:
z′ = √f ∗ (z − m) + m
where m is the mean of both distributions. If the variance of the original distribution is σ², the variance of the transformed distribution will be f ∗ σ².
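The affine correction is a one-line transform; a minimal sketch (values and the factor f are invented for illustration):

```python
def affine_correction(values, f):
    """Affine change-of-support correction: shrink (or stretch) values
    about the mean by sqrt(f). The mean is preserved and the variance
    is scaled by f (illustrative sketch)."""
    m = sum(values) / len(values)
    return [f ** 0.5 * (z - m) + m for z in values]
```

With f < 1 the values are squeezed toward the mean, mimicking the reduced spread expected at larger block support.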
Indirect Lognormal Correction
The indirect lognormal correction is a method that borrows the transformation that would have
been used if both the original distribution and the transformed distribution were both lognormal.
The idea behind this method is that while skewed distributions may differ in important respects
from the lognormal distribution, change of support may affect them in a manner similar to that
described by two lognormal distributions with the same mean but different variances. The indirect
lognormal correction transforms the z value of one distribution to z’ of another distribution using
the following exponential formula:
z′ = a ∗ z^b
a = [m / √(f ∗ cv² + 1)] ∗ [√(cv² + 1) / m]^b
b = √[ln(f ∗ cv² + 1) / ln(cv² + 1)]
where cv is the coefficient of variation, m is the mean and f is the variance adjustment factor.
One of the problems with the indirect lognormal correction method is that it does not necessarily
preserve the mean if it is applied to values that are not exactly lognormally distributed. In that
case, the transformed values may have to be rescaled using the following equation:
z″ = (m / m′) ∗ z′
where m′ is the mean of the transformed values.
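The full correction, including the rescaling step, can be sketched as follows (the sample values and the factor f in the test are invented):

```python
import math

def indirect_lognormal(values, f):
    """Indirect lognormal change-of-support correction, followed by the
    rescaling z'' = (m / m') * z' that restores the original mean
    (illustrative sketch)."""
    n = len(values)
    m = sum(values) / n
    var = sum((z - m) ** 2 for z in values) / n
    cv2 = var / m ** 2  # squared coefficient of variation
    b = math.sqrt(math.log(f * cv2 + 1) / math.log(cv2 + 1))
    a = (m / math.sqrt(f * cv2 + 1)) * (math.sqrt(cv2 + 1) / m) ** b
    zp = [a * z ** b for z in values]
    mp = sum(zp) / n
    return [m / mp * z for z in zp]  # rescale so the mean is preserved
```

Because the power transform compresses relative spread when f < 1, the corrected distribution keeps the original mean but shows a reduced variance, as a change to larger support should.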
The variograms we used in estimation were calculated by the correlation function. What do you need
to do to convert them to non-normalized variance, and why is it okay if you do so?
For the affine correction, you would need the block to polygonal variance ratio and the average grades
of the polygonal blocks. For indirect lognormal correction, you also need the coefficient of variation of the
polygonal block values. Store the results to a new model item. For the indirect lognormal correction, you may need to multiply the grades by a factor to keep the mean the same, because if the correction is applied to a distribution that is not exactly lognormal, it does not preserve the mean.
Model Classification
LEARNING OBJECTIVE: Make model calculations and create a multi-run for resource classification.
Classifying resources may be a subjective task. Various international reporting codes provide some guidance on the standard of work and levels of uncertainty required to classify resources into Measured, Indicated and Inferred categories. When Resources are converted to Reserves, they are then classified as Proven and Probable. Some of the model items that can be used for resource classification are:
classification are:
• Kriging Variance
• Distance to the closest composite
• Number of composites used
• Average distance to the block
• Furthest Distance of a composite to the block
• Number of drillholes used
• Number of octants
• Pass number
• Relative Variability index using Combined Variance
• Dilution Index
You may want to consider using some kind of a relative variability index to incorporate the actual
variance of the data. One of the disadvantages of using the kriging variance is that it is indepen-
dent of the actual values of the composites (it is calculated directly from the variogram).
MinePlan offers a Combined Variance calculation that can then be used in combination with the
estimation to create some kind of an index for classification purposes.
Future Training
Whether it takes a few hours or a few days, training with Hexagon’s newest tools can pay instant
dividends. Designed to fit your schedule, our mix-and-match formats support your learning needs
no matter what your expertise with MinePlan software.
Spend some time using our software in day-to-day applications. When you are comfortable working with MinePlan software, contact us at hexagon.com/company/contact-us/professional-services to set up your next training.