
Estimating Resources Using Basic Geostatistics


Contact Us

Technical Support Email - English: [email protected]
Technical Support Email - Spanish: [email protected]
Australia Tech Support: +61.7.4167.0076
Chile Tech Support: +56.22.898.6072
Canada Tech Support: +1.604.757.4394
Mexico Tech Support: +52.55.8421.0747
South Africa Tech Support: +27.87.550.4441
Peru Tech Support: +51.1.700.9844
USA Tech Support (English): +1.520.729.4396
USA Tech Support (Spanish): +1.520.448.4396
Prominas: Brazil Tech Support: +55.31.3497.5092, [email protected]

To review or update your support cases, reference our knowledge base, check for known issues
and/or submit ideas, please log into the Hexagon Community at:
community.hexagonmining.com
Note: If you do not yet have access to the Community, please request access by clicking on
"request login".

For training inquiries, please visit:


hexagon.com/company/contact-us/professional-services

Estimating Resources Using Basic Geostatistics


Updated: April 12, 2023

©2009-2023 by Leica Geosystems AG. All rights reserved. No part of this document shall be reproduced, stored in a retrieval system, or
transmitted by any means, electronic, photocopying, recording, or otherwise, without written permission from Leica Geosystems AG. All
terms mentioned in this document that are known to be trademarks or registered trademarks of their respective companies have been
appropriately identified. MinePlan® is a registered trademark of Leica Geosystems AG. This material is subject to the terms in the Hexagon
Mining Terms and Conditions (available at http://www.hexagonmining.com/).
MinePlan: Exploration to Production
MinePlan software is a comprehensive mine planning platform offering integrated solutions for
exploration, modeling, design, scheduling and production. It uses raw data — from drillholes,
blastholes, underground samples and other sources — to derive 2D and 3D models essential to
mine design and planning. Below the ground or at the surface, from precious metals to base
metals, for coal, oil sands and industrial minerals, MinePlan software tackles geomodeling mining
applications to improve productivity at every stage of a mine’s life.

GEOMETRIES
Use digitized data to define geologic information in section or plan; define topography contours;
and define structural information, such as mine designs, important in the evaluation of an ore
body. Virtually every phase of a project, from drillholes to production scheduling, either uses or
derives geometric data. MinePlan software lets you create, manipulate, triangulate and view any
geometric data as 2D or 3D elements.
DRILLHOLES
Manage drillhole, blasthole and other sample data in a Microsoft SQL Server database. The data
can be validated, manipulated and reported; and it is fully integrated with other MinePlan products
for coding, spearing, compositing, interpolation, statistics and display. Some of the types of data
you can store are drillhole collar information (location, length and more), down-hole survey data
(orientation), assays, lithology, geology, geotechnical data and quality parameters for coal.

COMPOSITING
Calculate composites by several methods, including bench, fixed length, honoring geology and
economic factors. These composites are fully integrated with other MinePlan products for statistics
and geostatistics, interpolation and display.
©2023 Hexagon

3D BLOCK MODEL (3DBM)
Used to model base metal deposits such as porphyry copper, non-layered deposits, and most
complex coal and oil sands projects. Vertical dimensions are typically a function of the mining
bench height. Contains grade items, geological codes and a topography percent, among other
qualities and measurements.

STRATIGRAPHIC MODEL
Used to model layered deposits, such as coal and oil sands. Although they are normally oriented
horizontally, they can be oriented vertically for steeply dipping ore bodies. Vertical dimensions are
a function of the seam (or other layered structures) and interburden thicknesses. Contains
elevations and thickness of seams (or other layered structures), as well as grade items, geological
codes, a topography percent, and other qualities and measurements.

MODELING
Build and manage 3D block, stratigraphic and surface models to define your deposit. Populate
your models through: geometries (polygons, solids or surfaces) coded into the model; calculations
on model items; text files loaded into the model; and interpolation through techniques such as
inverse distance weighting, kriging or polygonal assignment. As you design and evaluate your mine
project, you can update your model, summarize resources and reserves, calculate and report
statistics, display in plots or view in 2D and 3D.

ECONOMIC PIT LIMITS & PIT OPTIMIZATION


Generate pit shells to reflect economic designs. Using floating cone or Lerchs-Grossmann tech-
niques, work on whole blocks from the 3D block model to find economic pit limits for economic
assumptions such as costs, net value, cutoff grades and pit wall slope. Economic material is usually
one grade or an equivalent grade item. You can view the results in 2D or 3D, use the results to
guide your phase design, plot your design in plan or section, calculate reserves and run simple
production scheduling on your reserves.


PIT & UNDERGROUND DESIGN
Accurately design detailed open pit geometry, including ramps and pushbacks with variable wall
slopes, and display your pit designs in plan or section, clipped against topography or in 3D. You
can evaluate reserves for pit designs based on a partial block basis and calculate production
schedules from the reserves. Create and manipulate underground design through CAD functions
and from survey information.

LONG TERM PLANNING
Generate schedules for long term planning based on pushback designs, or phases, and reserves
computed by the mine-planning programs. The basic input parameters for each production period
include mill capacity, mine capacity and cutoff grades.

SHORT TERM PLANNING
Generate schedules for short term planning based on cuts or solids in interactive planning
modules. A large selection of parameters and flexible configurations let you control daily, weekly
or monthly production.

Support & Services
Client service and satisfaction is our first priority. Boasting a multilingual group of geologists
and engineers stationed worldwide, the MinePlan team has years of hands-on, real-world
experience.

GLOBAL SUPPORT
Providing global technical support during the day and with extended hours on weekdays and
weekends, technical support is at your service. The company's offices in the United States,
Canada, Mexico, Peru, Chile, Brazil, South Africa, Australia and the United Kingdom all offer
technical support via phone and email.

TRAINING
Our software is always improving in response to our clients' needs. It doesn't take long to fall
behind. That's why we're committed to helping you get the most from our software. Take
advantage of our introductory and advanced courses or create a customized curriculum that best
suits your needs.

SERVICES
MinePlan Services offers mine planning studies, mineral resource studies and project assistance
to help you get the most from your mine and from MinePlan. From scoping studies to final
feasibility studies, to MinePlan coach, depend on our multilingual MinePlan specialists.

Contents
Geostatistics Data Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Common Geostatistical Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Initializing Sigma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Data Analysis & Graph Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Compositing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Drillhole Spacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Composite Capping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Declustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Calculating Variograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Downhole Variograms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Variogram Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Variogram Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Model Interpolation Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Kriging Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Interpolation Debug . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Dynamic Unfolding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Visual Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Graphic Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Point Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Change of Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Model Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Conclusion & Future Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Geostatistics Data Set


The GEO Data Set comes from a multi-metallic porphyry deposit with copper as the main attribute
of interest. Other attributes, such as molybdenum and zinc content, have also been sampled. The
mineralization type of the deposit (oxide, primary sulfides and secondary sulfides) most strongly
controls the distribution of grade. The sulfide mineralization consists mainly of pyrite and
chalcopyrite. The deposit occurs in felsic to intermediate intrusive igneous rocks and associated
breccias. Alteration zones grade outward from the center, from a phyllic zone to a propylitic halo.

Lithology solids (left) and mineralization solids

ALTERATION ZONES
Phyllic 1
Potassic 2
Propylitic 3

LITHOLOGY CODES
Diorite 1
Granodiorite 2
Quartz Feldspar 3
Other* 4
*Intermediate Breccia, Late
Breccia, Country Rock

MINERALOGY CODES
Oxides 1
Primary Sulfides 2
Secondary Sulfides 3
Other (Outside) 4

PROJECT BOUNDARY COORDINATES (in metric units)
            Min    Max    Cell Size
Easting:    3500   8500   (DX=25)
Northing:   4500   9500   (DY=25)
Elevation:  705    1965   (DZ=15)


DRILLHOLE DATABASE
The drillhole database consists of 1034 drillholes collected over the course of two drilling
campaigns (one on the northwest side of the deposit and the other on the southeast). Drillhole
types include diamond, reverse circulation, hammer, and mixed hammer and diamond. Samples
were collected at various lengths, from 1-meter to 15-meter intervals. Element sample analysis
included total copper, acid soluble copper, molybdenum and zinc.
Block size has already been defined as 25x25x15 meters based on drillhole spacing. In this class,
you will only use drillholes inside the diorite, granodiorite and quartz feldspar lithologies to create
a model. Geostatistical analysis will be performed inside the three different mineralization types.
Lithology and mineralization solids have already been coded inside the drillholes and the model.

BEFORE YOU BEGIN

Before working through the exercises in this workbook, make drillhole views of the assays and overlay
topography. Then open the grid sets and display the drillholes in 2D views against the lithology and
mineralogy solids. Ensure that the drillholes have been coded with lithology and mineralogy codes.

NOTES:


Common Geostatistical Terms


Statistics

LEARNING OBJECTIVE
Review some basic statistical concepts which are a necessary foundation for the subjects in the
remainder of this book.

Statistics is the body of principles and methods for dealing with numerical data. It encompasses
all operations from the collection and analysis of the data to the interpretation and presentation
of the results. This body of knowledge comes into play when it is necessary or desirable to form
conclusions based on incomplete data. Statistics makes it possible to acquire useful, accurate
and operational knowledge from available information.

Geostatistics

Geostatistics is the application of statistical methods or the collection and analysis of statistical
data for use in the earth sciences.

Universe

The universe is the source of all possible data. For our purposes, an ore deposit can be defined as
the universe.

Population

"Population" is synonymous with "universe" in that it refers to the total category under
consideration. It is the set of all samples of a predetermined universe within which statistical
inference is to be performed. It is possible to have different populations within the same universe
based on the support of the samples, for example, the population of blast hole grades versus the
population of exploration hole samples. Therefore, the sampling unit and its support must be
specified in reference to any population.

Sample

A sample is a subset of a parent population or a part of the universe on which a measurement is


made. This can be a core sample, channel sample, a grab sample etc.

Random Variable

A random variable is a variable whose values are randomly generated according to a probabilistic
mechanism. It may be thought of as a function defined for a sampling process. For example, the
outcome of a coin toss or the future lab analysis result of the grade of a core sample in a diamond
drill hole can be considered as random variables.

Regionalized Variable

A regionalized variable is one outcome of all the random variables within the limits of the defined
universe. As the attributes considered in mining are generated by natural processes, the
regionalized variables have spatial structure. The grade of ore, the thickness of a coal seam, and
the elevation of the surface of a formation are examples of regionalized variables. In geostatistics,
it is assumed the existing data is sampled from an unknown regionalized variable. The
regionalized variable is used to characterize the structured random aspect of the attribute studied,
consisting of irregular variations at locations where the attribute is not sampled. The main purpose
of the use of regionalized variables is to express the structural properties of the attributes studied
in a framework in which they can be statistically characterized.
Support
The term "support" refers to the size, shape and orientation of a sample. The support of a sample
can be small or large with respect to the dimensions of the deposit. A change in any characteristic
of the support defines a new regionalized variable. For example, channel samples gathered from
anywhere in a drift will not have the same support as channel samples cut across the ore vein in
the same drift.
Frequency Distribution
The frequency distribution of an attribute shows the degree of occurrence of the attribute within
a range of values. There are two types of frequency distribution, probability density function (pdf)
and cumulative distribution function (cdf).

Summary Statistics
The general procedure used for describing a set of numerical data is referred to as "reduction of
the data." This process involves summarizing the data by computing a small number of measures
that characterize the data and describe it adequately for the immediate purposes of the analyst.
We use several statistics to describe a distribution. These statistics fall into three categories:
measures of location, measures of variation or spread, and measures of shape. All statistics below
refer to sample statistics.

MEASURES OF LOCATION
You can locate these measures on the distribution of the data.
Sample Mean
An average of all available samples. The average can further be weighted to account for different
aspects of the nature of the samples, including preferential sampling, variability in support, and
sample recovery.
Median
The median is the midpoint of the observed values if they are arranged in increasing (or
decreasing) order. Therefore, half of the values of the distribution are below the median and half
of the values are above the median. The median can easily be read from a probability plot.
Mode
The mode is the value that occurs most frequently. The mode is easily located on a graph of
a frequency distribution. It is at the peak of the curve, the point of maximum frequency. On a
histogram, the class with the tallest bar can give a quick idea where the mode is.


Minimum & Maximum


The minimum is the smallest value in the data set. The maximum is the largest value in the data set.
Quartiles
The quartiles split the data into quarters in the same way the median splits the data into halves.
Quartiles are usually denoted by the letter Q. For example, Q1 is the lower or first quartile, Q3 is the
upper or third quartile, etc. As with the median, quartiles can be read from a probability plot.
Deciles, Percentiles and Quantiles
The idea of splitting the data into halves or into quarters can be extended to any fraction. Deciles
split the data into tenths. One-tenth of the data falls below the first or lowest decile. The fifth
decile corresponds to the median. In a similar way, percentiles split the data into hundredths. The
25th percentile is the same as the first quartile, the 50th percentile is the same as the median and
the 75th percentile is the same as the third quartile. "Quantile," on the other hand, is a generic
term that encompasses all fractions. Quantiles are usually denoted by q, such as q.25 and q.75,
which correspond to the lower and upper quartiles, Q1 and Q3, respectively.
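These equivalences are easy to verify numerically. The sketch below uses NumPy on a small, made-up set of grade values (any numeric sample would do); none of the numbers come from the GEO data set.

```python
import numpy as np

# Hypothetical copper grades, for illustration only
grades = np.array([0.12, 0.31, 0.25, 0.48, 0.19, 0.55, 0.27, 0.33, 0.41, 0.22])

q1 = np.quantile(grades, 0.25)      # lower quartile, Q1 (= 25th percentile = q.25)
median = np.quantile(grades, 0.50)  # the median (= fifth decile = 50th percentile)
q3 = np.quantile(grades, 0.75)      # upper quartile, Q3 (= 75th percentile = q.75)

# Half the values fall below the median and half above
below = np.sum(grades < median)
above = np.sum(grades > median)
```

Because `np.quantile(grades, 0.50)` and `np.median(grades)` are the same statistic, either call can be used to read the fifth decile.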

MEASURES OF VARIATION
The other measurable characteristic of a set of data is the variation. Measures of variation or
spread are frequently essential to give meaning to averages. There are several such measures.
The most important ones are variance and standard deviation.
Sample Variance
The sample variance describes the variability of the data values. The variance is the average of
the squared difference between sampled values and the sample mean. As in the case of the
average, the sample variance can be weighted or non-weighted. Since the sample variance
involves the squared differences, it is sensitive to outlier values.
Standard Deviation
The standard deviation is the square root of the variance. It is often used instead of the variance
since its units are the same as the units of the attribute studied.

MEASURES OF SHAPE
In addition to measures of central tendency and variation, there are other measurable character-
istics of a set of data that may describe the shape of the distribution.
Skewness
The skewness is often calculated to determine if a distribution is symmetric. The direction of
skewness is defined as the direction of the longer tail of the distribution. If the distribution tails to
the left, it is called negatively skewed; if it tails to the right, it is called positively skewed.
Peakedness
The peakedness or kurtosis is often calculated to determine the degree to which the distribution
curve tends to be pointed or peaked.


Coefficient of Variation
The coefficient of variation is a measure of relative variation. It is the standard deviation divided
by the mean and does not have a unit. Therefore, it can be used to compare the relative dispersion
of values around the mean among distributions described in different units. It may also be used
to compare distributions that, though measured in the same units, are of such different absolute
magnitudes that comparing their variability with measures of absolute variation is not meaningful.
If estimation is the final goal of a study, the coefficient of variation can provide some warning of
upcoming problems. A coefficient of variation greater than 1 indicates the presence of some
erratic high sample values or the presence of trends in the mean and variance, which may have
significant impact on the final estimates.
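The calculation can be sketched as follows (NumPy, with made-up values in two different units). Because the CV is unitless, the two sample sets can be compared directly even though their magnitudes differ by orders of magnitude.

```python
import numpy as np

# Hypothetical samples in different units: copper in percent, moly in ppm
cu_pct = np.array([0.2, 0.4, 0.3, 0.9, 0.5])
mo_ppm = np.array([120.0, 310.0, 150.0, 80.0, 640.0])

def coeff_variation(x):
    # CV = standard deviation / mean; unitless, using the sample (n-1) estimator
    return np.std(x, ddof=1) / np.mean(x)

cv_cu = coeff_variation(cu_pct)   # ~0.59
cv_mo = coeff_variation(mo_ppm)   # ~0.88: more relative spread, but still below 1
```

Neither CV exceeds 1 here; a value above 1 in real assay data would flag erratic high samples worth investigating before estimation.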

BIVARIATE STATISTICS
Sample Covariance
The variability between two sample variables is described by the covariance.
Correlation Coefficient
Covariance is affected by the magnitude of the data values. Therefore, we sometimes use the
correlation coefficient instead, which is a normalized covariance that removes the effect of the
magnitude of the data values. The correlation coefficient ranges from -1 to 1, with values of 1 and
-1 indicating perfect direct and inverse correlation, respectively, and 0 indicating no correlation.
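The normalization is easy to demonstrate with a NumPy sketch on made-up paired samples: rescaling one variable changes the covariance but leaves the correlation coefficient untouched.

```python
import numpy as np

# Hypothetical paired samples on the same intervals
cu = np.array([0.10, 0.25, 0.30, 0.50, 0.45])       # copper, percent
mo = np.array([0.010, 0.020, 0.028, 0.041, 0.039])  # moly, percent

cov = np.cov(cu, mo, ddof=1)[0, 1]   # covariance, scale-dependent
r = np.corrcoef(cu, mo)[0, 1]        # correlation coefficient, in [-1, 1]

# Express moly in ppm instead of percent: the covariance grows 10,000-fold ...
cov_ppm = np.cov(cu, mo * 1e4, ddof=1)[0, 1]
# ... but the correlation coefficient is unchanged
r_ppm = np.corrcoef(cu, mo * 1e4)[0, 1]
```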

THEORETICAL MODELS OF DISTRIBUTIONS

The normal and the lognormal distributions are commonly used in statistical analysis.
Normal Distribution
The normal distribution is the most common theoretical probability distribution used in statistics.
It is also referred to as the Gaussian distribution. The normal distribution curve is bell-shaped.
Lognormal Distribution
The lognormal distribution occurs when the logarithm of a random variable has a normal
distribution.
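This definition suggests a quick diagnostic, sketched below with simulated values (NumPy; the distribution parameters are made up): grades drawn from a lognormal model are strongly positively skewed, while their logarithms are roughly symmetric.

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulate lognormal "grades": the exponential of a normal variable
normal_scores = rng.normal(loc=-1.0, scale=0.5, size=5000)
grades = np.exp(normal_scores)

def skewness(x):
    # Third central moment scaled by the standard deviation cubed
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

skew_raw = skewness(grades)          # strongly positive: long right tail
skew_log = skewness(np.log(grades))  # near zero: the logs are normal
```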

NOTES:


Initializing Sigma

LEARNING OBJECTIVE
Connect to a data source for statistical and geostatistical analysis.

Sigma offers a package of statistical and geostatistical tools. It is a stand-alone application, so it
can run outside of MinePlan 3D (MP3D), or it can be called from MP3D in the MinePlan Menu.

Sigma Projects
Creating a Sigma project creates a set of subfolders in the location you choose to start your
project. The main subfolder _sigmaresources will store all graphs and necessary supporting files
created in the use of the tool.
The first time you use the tool, you will be asked if you want to recreate the project. If you have
already created a project, you will not be prompted.

EXERCISE: Initialize a New Sigma Project


Create a new Sigma project in your project folder. After starting Sigma, you can create a data source for
graph creation with any desired filters. You can also set up quick graphs via the simple single variable type
or via the matrix option for more complex domaining of the data. Choose a setup that will provide you the
information you are interested in and close the data source panel. If you set up a lot of quick graphs,
the process of building them will take some time.

Start Sigma MinePlan Menu → Sigma → Select project directory

Setup a data source Setup → New data source → Filters → Quick graphs
and quick graphs

Data Sources
Before you can use Sigma to create graphs, you must first connect to a data source. Once the
connection is made, you can run any number of Sigma graphs without reconnecting to the data
source. Sigma can manage many data sources at one time for your use.
There are many types of data sources to choose from. There are three types of drillhole sources,
four types of block model connections, and a general CSV connection. Sigma supports Torque
data and the older MinePlan files 09, 11, and 12. It can attach to block models that are normal,
sub-blocked, single ore percent, or multiple ore percent.
Creating a data source is basically a three-step process: select your data source, complete the
general data source setup, and set up quick graphs. The first two steps are critical and the last is
optional. To create your data source, press the "PLUS" button and choose a type. Give your data
source a name and then select the appropriate model, drillhole, or ASCII data you wish to explore.
To finalize, press "Set Connection."


General data source setup is where you can choose the items you wish to review and apply filters.
Start by choosing categorical and continuous variables of interest and setting up filters on the
second tab. The more limited your data source, the faster the creation of graphs will be. On the
overview tab, you will press “Process Data” to see the metadata for your chosen variables and
move on to the quick graphs step.
The final step is setting up the quick plots for the data. This can be done either with “simple” one
variable groupings or “matrix” style, where domains incorporate two or more categorical vari-
ables. Choose a few quick graphs from each style. You will be prompted for the global continuous
variables you wish to analyze for each category you choose. You may modify this for each bin if
you choose.

EXERCISE: Connect to a Data Source and Setup Quick Graphs

Connect to Data Press "+" and choose a data source type → Give it a name → Choose variables
→ Choose filters → Process Data

Setup Quick Graphs Setup quick graphs “simple” → Select categorical variables → Select continu-
ous variables

Setup More Quick Setup quick graphs “matrix” → Select categorical variables for rows and
Graphs columns → Select continuous variables

NOTES:


Data Analysis & Graph Displays

LEARNING OBJECTIVE
Prepare and verify your data. You must also be able to analyze, summarize, understand and
create basic charts (such as histograms, scatterplots, QQ plots and custom plots).

It is essential to organize and analyze statistical data to understand its characteristics, and to see
it clearly. Therefore, much of statistics deals with the organization, presentation and summary of
data. The organization of data includes its preparation and verification.

ERROR CHECKING
In drill hole assay data preparation, the initial drill hole logs must be coded carefully and legibly to
prevent any future errors. One should not use a zero or blank to indicate missing data. It is
preferable to use a specific negative value, such as -1 or -999, for such data. Otherwise, the missing
data may end up being used in estimation as part of the actual data. Other helpful suggestions
to verify the data for accuracy:
• Sort the data and examine the extreme values. Try to establish their authenticity by referring
to the original sampling logs.
• Plot sections and plan maps for visual verification and spotting the coordinate errors. Are
they plotting within the expected limits?
• Locate the extreme values on a map. Are they located along trends of similar data values or
are they isolated? Be suspicious of isolated extremes. If necessary and possible, get duplicate
samples.
Time spent for verification of the data is often rewarded by a quick recognition when an error has
occurred!
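The sentinel convention for missing data is only safe if every statistic filters the flags out first. A minimal sketch (NumPy, hypothetical assay values with a -999 flag) shows what goes wrong otherwise:

```python
import numpy as np

MISSING = -999.0  # sentinel for missing assays; never use zero or blank
assays = np.array([0.35, MISSING, 0.12, 0.48, MISSING, 0.27])

valid = assays[assays != MISSING]  # drop flagged intervals before any statistics

mean_wrong = assays.mean()  # sentinel values drag the mean far negative
mean_right = valid.mean()   # 0.305, the true sample mean
```

The obviously impossible negative mean is exactly why a sentinel like -999 is preferred over zero: an accidental inclusion is immediately visible rather than silently biasing the estimate.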

Frequency Distributions and Histograms

One of the most common and useful presentations of data sets is the frequency table and the
corresponding graph, the histogram. A frequency table records how often observed values fall
within certain intervals or classes. The mean and the standard deviation of the values within these
intervals are also reported.
The histogram is the graphical representation of the same information in a frequency table.
Summary statistics are customarily included in the histogram to complete the preliminary
information needed to study the sample data.
Frequency tables and histograms are useful in ore reserve analysis for many reasons:
1. They give a visual picture of the data and how they are distributed.
2. Bimodal distributions show up easily, which usually indicates the mixing of two separate
populations.
3. Outlier high grades can be easily spotted.
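A frequency table is straightforward to compute by hand. The sketch below (NumPy, simulated grades; the class layout of 15 bins of width 0.1 matches the exercise that follows) builds the counts and per-class means that a histogram summarizes.

```python
import numpy as np

rng = np.random.default_rng(7)
grades = rng.lognormal(mean=-1.2, sigma=0.6, size=500)  # simulated grades

# 15 classes of width 0.1 spanning 0.0-1.5; values outside are excluded
edges = np.linspace(0.0, 1.5, 16)
counts, _ = np.histogram(grades, bins=edges)
rel_freq = counts / counts.sum()  # relative frequency per class

# Per-class means, as reported in a frequency table (empty classes yield nan)
which = np.digitize(grades, edges) - 1
class_means = [grades[which == i].mean() if np.any(which == i) else float("nan")
               for i in range(len(edges) - 1)]
```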


EXERCISE: Create Histograms
Generate three histograms for total copper by mineralogy code. The bin width will be 0.1 with 15 bins. After
creating the three histograms, use the Overlay function to add them to the same chart.

Build a Histogram Statistics panel → Histogram or Batch Create → Graph Options

Combine Histograms Select histograms in repository → Statistics Panel → Overlay

View Data Statistics Overlay view → Statistics below graph → Statistics → Summary Statistics

View Cumulative Histogram view → Right click and view Cumulative Frequency Curve
Frequency

The Filter tab appears throughout Sigma. This tab allows you to set up multiple custom filters on data (e.g.
histogram of copper only where lithology = 2). To use a filter, open the Filter tab of the graph options,
click the + button and then choose your filter options.


Cumulative Probability Plots

Probability plots are useful in determining how close the distribution of sample data is to being
normal or lognormal. Probability plots can also be used for detecting the presence of multiple
populations. Although deviations from the straight line on the plots do not necessarily indicate
multiple populations, they represent changes in the characteristics of the cumulative frequencies
over different intervals. It is always a good idea to find out the reasons for such deviations.

EXERCISE: Create a Cumulative Probability Plot (CPP)

Generate a plot showing the probability of copper grades. The grade is on the y-axis, and the probability
value is on the x-axis. The minimum grade is 0 and the maximum grade is 2.

Build a CPP Quick Graphs or Statistics panel → CPP → Graph Options

Experiment with Filters Use the Data Filters tab → “+” icon → Variable “Mind Code”, OP “=”, Value
“1”, “2”, or “3”. Click Update. Generate a different graph for each value.


Box Plots
A box plot (also known as a box and whisker diagram) is a convenient way of graphically depicting groups of numerical data through their quartiles. It displays the distribution of data based on the five-number summary: lowest non-outlier, first quartile, median, third quartile, and highest non-outlier. Outliers may be plotted as individual points.
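The five-number summary a box plot draws can be sketched as follows. This is a hypothetical helper, not Sigma code; the 1.5 × IQR outlier fence is an assumed convention.

```python
# Hypothetical helper -- not Sigma code. Uses the common 1.5 * IQR fence.
def quantile(sorted_vals, q):
    """Linear-interpolation quantile of an already-sorted list."""
    pos = q * (len(sorted_vals) - 1)
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(sorted_vals):
        return sorted_vals[lo] * (1 - frac) + sorted_vals[lo + 1] * frac
    return sorted_vals[lo]

def five_number_summary(values):
    """Five-number summary plus the points flagged as outliers."""
    vals = sorted(values)
    q1, med, q3 = (quantile(vals, q) for q in (0.25, 0.5, 0.75))
    lo_fence = q1 - 1.5 * (q3 - q1)
    hi_fence = q3 + 1.5 * (q3 - q1)
    inliers = [v for v in vals if lo_fence <= v <= hi_fence]
    return {"low": inliers[0], "q1": q1, "median": med, "q3": q3,
            "high": inliers[-1],
            "outliers": [v for v in vals if v < lo_fence or v > hi_fence]}
```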

EXERCISE: Create Box Plots


Create box plots for total copper separated by mineralogy type (using the default +4 quantiles). To do this, create three boxes for the Total Copper item (in the Boxes tab), named after the mineralogy code values.

Build a Box Plot Statistics panel → Box Plot → Graph Options

Analyze Analyze Plot → Graph Statistics (visual and numerical)


Scatter Plots
One of the most common and useful presentations of bivariate data sets is the scatter plot. A scatter plot is an x-y graph of the data on which the x-coordinate corresponds to the value of one variable, and the y-coordinate to the value of another variable.
A scatter plot is used to determine if two variables are related or if there are unusual data pairs. In the early stages of studying a spatially continuous data set, it is necessary to check and clean the data. Even after the data have been cleaned, a few erratic values may have a major impact on estimation. The scatter plot can be used to help both in the validation of the initial data and in the understanding of later results.

EXERCISE: Create a Scatter Plot


Create a scatter plot of total copper versus moly. Copper ranges from 0–0.5 and moly from 0–0.05. Enable
the overflow categories to group points together, and use a conditional expectation line.

Build a Scatter Plot Statistics panel → Scatter Plot → Graph Options

Toggle Frequency Map Style Tab → Enhancements → “Show as Frequency Map”


Quantile-Quantile Plots
Two distributions can be compared by plotting their quantiles against one another. The resultant plot is called a quantile-quantile, or simply q-q, plot. If the q-q plot appears as a straight line, the two marginal distributions have the same shape. A 45-degree line further indicates that their means and variances are also the same.
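The idea can be sketched in Python. This is illustrative only; the quantile count and the linear-interpolation rule are arbitrary choices, not Sigma's.

```python
# Illustrative q-q pairing of two samples -- not Sigma's implementation.
def qq_pairs(x, y, n_quantiles=9):
    """Pair the q-th quantiles of two samples for a q-q plot."""
    def quantile(sorted_vals, q):
        pos = q * (len(sorted_vals) - 1)
        lo = int(pos)
        frac = pos - lo
        if lo + 1 < len(sorted_vals):
            return sorted_vals[lo] * (1 - frac) + sorted_vals[lo + 1] * frac
        return sorted_vals[lo]
    xs, ys = sorted(x), sorted(y)
    qs = [(i + 1) / (n_quantiles + 1) for i in range(n_quantiles)]
    return [(quantile(xs, q), quantile(ys, q)) for q in qs]
```

If `y` is a linear rescaling of `x`, the pairs fall on a straight line (same shape) but not on the 45-degree line unless the means and variances also match.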

EXERCISE: Create QQ Plots


Create a Quantile-Quantile graph comparing MOI and CUID from the stat15.dat block model. Create one
for each MINER type in the Data Filters area.

Build a QQ Plot Statistics Panel → QQ Plot → Input Variable X “CUID”, Y “MOI”, apply matching
filters for each axis in the Data Filters tab using the MINER item. Create a new
QQ plot for each MINER number.


Contact Plots
A contact plot is a plot of mean grades as a function of signed distance from a contact of a given type. Typically, a contact plot is used to determine whether the grade transitions smoothly across the contact, or whether there is a discontinuity. Examples of discontinuity and transition zones, respectively, are shown.
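A minimal one-dimensional sketch of the calculation, assuming samples are already tagged with their signed distance to the contact (the function and tuple layout are hypothetical, not Sigma's):

```python
# Minimal 1-D sketch; the sample tuples and bin rule are illustrative assumptions.
def contact_profile(samples, step=2.0, max_dist=30.0):
    """samples: (signed_distance, grade) pairs, negative = below the contact.
    Returns {bin_center: mean grade} for |distance| <= max_dist."""
    bins = {}
    for d, g in samples:
        if abs(d) > max_dist:
            continue
        center = int(d // step) * step + step / 2.0
        bins.setdefault(center, []).append(g)
    return {c: sum(gs) / len(gs) for c, gs in sorted(bins.items())}
```

A jump in mean grade between the bins just below and just above zero suggests a hard (discontinuous) contact; a gradual ramp suggests a transition zone.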

EXERCISE: Create Contact Plots


Create a contact plot for the Assay drillhole coverage comparing Total Copper across the several Lithology
types.

Build a Contact Plot Geostatistics Panel → Contact Plot → Input Variable “Total Copper”, Contact
variable Lith Code, Contact Values 1,2,3,4, step size 2.00, max distance 30 →
Apply

Explore Contact Relationships: Contacts Selection → Change contact combinations, investigate chart

Change Styles Style tab → Enhancements → Add Bin Average points, Histogram Bars


Pivot Report
Pivot Reports are convenient for preparing tables for various reports. You can customize the format in Sigma, and cut and paste into software like Microsoft Excel and Microsoft Word. The following procedure demonstrates how to generate a basic univariate custom report. Univariate describes an expression, equation, function or polynomial of only one variable. It is also termed a one-way sensitivity analysis.

EXERCISE: Create a Pivot Report


Create a Pivot Report for Total Copper drillhole assay values. Check the coefficient of variation. Is it too high?
Filter out some of the high values. Does the coefficient of variation change (decrease)?

Create a Pivot Report Statistics → Pivot Report → Input Variable Total Copper → Row or Column
Variable Min Code. Select all 4 min code values.

Filter High Values Data Filters tab → Total Copper → Choose a value to filter out and click Update. Investigate how the coefficient of variation changes.


Compositing
A composite is the weighted average of a set of samples that fall within a defined boundary. The weighting factor is usually the sample length, but it may also include sample specific gravity or other parameters.

LEARNING OBJECTIVE: Composite drillhole data.
Use composites, instead of samples, in the interpolation of the deposit model to provide a mining basis for modeling, reduce the amount of data used and provide uniform support for geostatistics. The composite length is a function of the variability of the data, which is a characteristic of the geology of the deposit. You can also add geologic codes through the drillhole view coding options in MP3D, or by overlaying codes from other Torque coverages. Those composites are then ready for interpolation directly in MinePlan.
MinePlan Torque offers a number of compositing interval methods: bench, seam, fixed length, honor sample attribute, composite entire sample site, economic, and samples-to-composites approach.
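The length-weighted averaging behind fixed-length compositing can be sketched as follows. This is a simplified illustration, not Torque's implementation; assay intervals are assumed contiguous within one drillhole.

```python
# Simplified sketch of fixed-length, length-weighted compositing -- not Torque.
def fixed_length_composites(assays, comp_len=15.0):
    """assays: (from_m, to_m, grade) tuples for one hole. Returns
    (from, to, grade) tuples where grade is the length-weighted average
    of the assay portions falling inside each composite window."""
    if not assays:
        return []
    start = assays[0][0]
    end = max(t for _, t, _ in assays)
    comps = []
    top = start
    while top < end:
        bot = min(top + comp_len, end)
        wsum = gsum = 0.0
        for f, t, g in assays:
            overlap = max(0.0, min(t, bot) - max(f, top))  # length inside window
            wsum += overlap
            gsum += overlap * g
        if wsum > 0:
            comps.append((top, bot, gsum / wsum))
        top = bot
    return comps
```

For example, 3 m and 9 m assays averaged into one 15 m composite are weighted by their lengths, which is why mixed 3 m/15 m assay intervals can all be brought to a common 15 m support.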

EXERCISE: Compare Assay Lengths


Make a histogram of lengths. Note that most assays are at 3m and 15m intervals (so we will make them all
15m).


Composites (left of trace) vs. Assays (right of trace)

EXERCISE: Create Composites


In MinePlan Torque, composite copper and molybdenum using fixed length and honoring mineralogy code.
Transfer the lithology code by majority.

View the composites in MinePlan 3D (MP3D). Add strips and compare to assay strips.

Run statistics in Sigma and compare to the assay statistics. Use Probability plots per mineralogy to compare.


Drillhole Spacing
Before building a model, it is important to analyze drillhole spacing to find an appropriate lag distance for variogram calculation. Distances around the drillhole spacing can be used as lag distances.

LEARNING OBJECTIVE: Find an appropriate lag distance for variogram calculations by analyzing drillhole spacing.
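As a rough illustration, the average nearest-neighbor distance between collar locations gives a first guess at the lag. This is a sketch, not procedure p52201.dat; the brute-force search is fine for small data sets.

```python
import math

# Rough sketch, not procedure p52201.dat; O(n^2) is fine for small collar sets.
def avg_nearest_spacing(collars):
    """collars: (easting, northing) pairs. Average distance from each collar
    to its nearest neighbor -- a first guess at the variogram lag distance."""
    nearest = [min(math.hypot(x1 - x2, y1 - y2)
                   for j, (x2, y2) in enumerate(collars) if j != i)
               for i, (x1, y1) in enumerate(collars)]
    return sum(nearest) / len(nearest)
```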

EXERCISE: Find Lag Distance


Run the procedure p52201.dat on your composites. This will output a CSV file which you can then process
via Sigma. You may want to separate runs per mineralogy codes. Alternatively, you can store the nearest
distance to each composite.

Open Compass Go to MinePlan → Compass → File → Open → stat.prj. Reset your Torque
database in the Setup tab.

Run Procedure: Run procedure p52201.dat, “Analyze Drillhole Spacing”. Choose Torque composite input, sample site type Drillholes, Composite Set: Fixed. On area selection choose benches 26 to 63. Label Item = Total Copper, Search distance on easting and northing = 100, on elevations 7.5 (half bench height). Experiment with different Min Code filters in the Optional Data Selection for DH Spacing Analysis page.


EXERCISE: Load Drillhole Spacing Results to Sigma


Take the dat522.csv output by the drillhole spacing procedure and set it as a data source to your Sigma
project. Run a scatterplot to investigate average drillhole spacing per bench.

Set Up as Sigma Data Source: Use the Setup button in the Sigma Home tab. Use the “Create new data source” button to select the CSV option and navigate to the dat522.csv. Select the appropriate header row.

Create Scatterplot Set input variable X as Dist Average, and Y as bench.


Composite Capping
Sometimes it is necessary to restrict the influence of outliers when building a resource model. You may want to consider capping composite values based on probability plots or cut-off analyses.

LEARNING OBJECTIVE: Cap drillhole values to reduce the impact of outliers.
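The sensitivity idea behind choosing a cap value can be sketched numerically (a hypothetical helper, not a Sigma feature):

```python
# Hypothetical helper for a cap-value sensitivity check -- not a Sigma feature.
def capped_mean(values, cap_value):
    """Mean grade after capping every value at cap_value."""
    return sum(min(v, cap_value) for v in values) / len(values)

def cap_sensitivity(values, cap_values):
    """Mean grade at each trial cap, for comparison against the raw mean."""
    return {c: capped_mean(values, c) for c in cap_values}
```

Plotting the capped mean against the trial cap values shows where the mean stabilizes, which supports (or questions) a cap chosen from the probability plot.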


EXERCISE: Analyze Outliers


Check the composite probability plots in Sigma. Are there any interesting features in the right tail
of the distribution? Would you cap this data? If so, at what value would you apply a cap?

Consider doing a sensitivity analysis on your cap value. Compare what happens to the mean grade if you cap the data at different values. In Sigma, run box plots with different maximum values. Study the mean values and other statistics after each run. Compare the cap value you chose in the previous exercise with the sensitivity plot.

EXERCISE: Cap Total Copper Composites


Use Calculated Attributes in Torque to cap your total copper items into a new attribute.
Open Torque Data Setup Create a new Calculated Sample Attribute and use the Script section to set
a new Cu max value with a conditional statement. Rerun your composites
and include the new capped attribute for analysis in Sigma. Generate Pivot
Charts similar to the earlier exercise on the capped grades and investigate
the difference.


Declustering
Preferential drilling is often experienced in higher-grade areas, which may result in clusters of samples biased toward high grade values. If clustering occurs at these high-grade areas, you will overestimate the mean; declustering gives the unbiased summary statistics of the population. This is an important step in resource modeling, as the rest of the inference of the domain will be based on this correction of sampling bias. The declustered variance defines the sill of the variogram more appropriately than the raw variance, thus affecting how the range of a variogram is calculated. The method implemented in Sigma is called Cell Declustering. It overlays cells (cubes) over a domain and assigns weights to the samples inversely related to the number of samples in each cell.

LEARNING OBJECTIVE: Decluster your domains using the Cell Decluster method in Sigma.
The cell sizes are set in the X direction; the Y and Z directions are taken care of using an anisotropy ratio between those directions and X (by default the ratio is 1:1). The minimum cell size is usually the closest distance in your drilling space, and the maximum size is set to no more than half the distance of the domain. The grid origin placement is by default generated at 25 different points; Sigma calculates the average weight and displays it in the graph. For validation purposes, the statistics from declustered composite values can be compared to the 3D block model statistics.
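The weighting idea can be sketched in two dimensions. This is a simplification: Sigma's Cell Decluster is three-dimensional, uses anisotropy ratios, and averages over 25 origin offsets, none of which is reproduced here.

```python
# Simplified 2-D cell declustering -- single grid origin, no anisotropy.
def cell_decluster_weights(points, cell_size):
    """points: (x, y) sample locations. Each sample's weight is inversely
    proportional to the number of samples sharing its cell, rescaled so the
    weights sum to the number of samples."""
    counts = {}
    keys = []
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        keys.append(key)
        counts[key] = counts.get(key, 0) + 1
    occupied = len(counts)
    n = len(points)
    return [n / (counts[k] * occupied) for k in keys]

def declustered_mean(values, weights):
    """Weighted mean of the sample values."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

With clustered high grades, the declustered mean drops below the naive mean because the densely drilled cells share one cell's worth of weight.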

EXERCISE: Decluster the Data in Sigma


Investigate how cell declustering affects the overall average of the copper values in your composites. Filter out the Min = 4 host rock. Notice how the global copper average drops with declustering applied, as high-grade areas with high drillhole density are inversely weighted. Very small cell sizes will match the non-declustered mean, as each cell will at most contain a single sample; very large cell sizes will also approach the non-declustered mean, as a single cell will eventually contain all the samples.

Create Cell Declustering Chart: Geostatistics tab → Cell Declustering → Choose Copper Composites data source → Grid Parameters: Min X cell size 0.5, max 5,000, number of cell sizes 100. In the data filter, put Min Code != 4 to only include mineralized rock.

Export Cell Size Use the Export Cell Size option to create a weighted cell size file you can use
in other Sigma charts. Choose a cell size in the dropdown menu, click ex-
port, and save the file. Create two histograms of Cu values, one using the
“Decluster” option and select the decluster file just exported. Compare the
differences.


Calculating Variograms
Several geostatistical tools — such as correlations, covariance and variograms — describe the spatial continuity in an ore deposit. All of these tools use summary statistics to describe how spatial continuity changes as a function of distance and direction.

LEARNING OBJECTIVE: Build and view variogram maps to analyze the spatial continuity in an ore deposit for ore reserve estimation.
The variables in earth sciences represent some similarity (or dissimilarity) that exists between the value of a sample at one point and the value of another sample some distance away. This expected variation can be called the spatial similarity or spatial correlation. Variograms provide a means to measure the similarity or correlation of sample values within a deposit, or rather within a homogeneous area of the deposit in which it is assumed the geological relationships are the same or similar.
In simplest terms, a variogram measures the spatial correlation between samples. One possible way to measure this correlation between two samples at points xi and xi + h, taken h distance apart, is the function:
f1 (h) = 1/n ∑[z(xi ) − z(xi + h)]

In this function, z(xi ) refers to the assay value of the sample at point xi, and h is the distance between samples. Thus, the function measures the average difference between samples h distance apart. Although this function is useful, in many cases it may be equal or close to zero because the differences cancel out. A more useful function is obtained by squaring the differences:

f2 (h) = 1/n ∑[z(xi ) − z(xi + h)]2

In this function, the differences do not cancel each other out, and the result is always positive. This second function was the variogram, originally denoted 2γ(h). However, popular usage refers to the semi-variogram γ(h) as the variogram. Therefore, throughout this chapter, variogram will refer to the following function:

γ(h) = 1/(2n) ∑[z(xi ) − z(xi + h)]2 , i = 1, . . . , n

Note that γ(h) is a vector function in three-dimensional space, and it varies with both distance and direction. The number of samples, n, is dependent on the distance and direction selected to accept the data.
Variograms will eventually help determine search parameters to apply in the model interpolation. In the case of kriging, they will dictate the weighting of the composites falling inside the search for each location of estimation.
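For intuition, a one-dimensional experimental semi-variogram can be sketched as follows. This is illustrative only; real calculations bin pairs by lag and direction with angular and distance tolerances.

```python
# Illustrative 1-D experimental semi-variogram -- not a production calculation.
def semivariogram(samples, lag, tol=0.5):
    """samples: (position, value) pairs. Returns gamma(lag) averaged over all
    pairs whose separation is within lag +/- tol, or None if no pairs qualify."""
    sq_diffs = []
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            h = abs(samples[j][0] - samples[i][0])
            if abs(h - lag) <= tol:
                sq_diffs.append((samples[i][1] - samples[j][1]) ** 2)
    if not sq_diffs:
        return None
    # the 1/2 factor makes this the semi-variogram gamma(h), not 2*gamma(h)
    return sum(sq_diffs) / (2 * len(sq_diffs))
```

Evaluating this at a series of lags produces the experimental points that the models later in this chapter are fitted to.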


Variograms in Sigma
Sigma offers six types of variograms:


Traditional:

This is the standard variogram formula.

Traditional Standardized:

This is the same as a traditional variogram but the results are all divided by the maximum variability
of the data. This enforces a total sill of 1 on the data.

Madogram:

This is just like the traditional variogram except the absolute difference of the function is used
instead of the squared value. This can help correct for high outlier influence.

γ(h) = 1/(2n) ∑ |z(xi ) − z(xi + h)| , i = 1, . . . , n

Correlogram:

By definition, the correlation function ρ(h) is the covariance function standardized by the appropriate standard deviations (so in a sense it is like a normalized covariance).

ρ(h) = C(h)/(σ−h ∗ σ+h )

where σ−h is the standard deviation of all the data values whose locations are −h away from some other data location:

σ²−h = (1/N) Σ v²i − m²−h

and σ+h is the standard deviation of all the data values whose locations are +h away from some other data location:

σ²+h = (1/N) Σ v²j − m²+h

The shape of the correlation function is similar to the covariance function. Therefore, it needs to be inverted to give a variogram type of curve, which we call the correlogram. Since the correlation function is equal to 1 when h = 0, the value obtained at each lag for the correlation function is subtracted from 1 to give the correlogram:

γ(h) = 1 − ρ(h)

Pairwise Relative and Relative variograms (local mean):

These types of variograms are used to account for varying means. Pairwise Relative Variograms
and Local Relative Variograms scale the original variogram to some local mean value. This serves
to reduce the influence of very large values.

A relative variogram is obtained from the ordinary variogram by simply dividing each point on the
variogram by the square of the mean of all the data used to calculate the variogram value at
that lag distance. Pairwise relative variogram also adjusts the variogram calculation by a squared
mean. This adjustment, however, is done separately for each pair of sample values, using the
average of the two values as the local mean.


γR (h) = γ(h)/[m(h) + c]2

where c is a constant parameter used in the case of a three-parameter lognormal distribution.


Pairwise Relative Variogram:

γPR (h) = 1/(2n)Σ[(vi − v j )2 /((vi + v j )/2)2 ]

where vi and v j are the values of a pair of samples at locations i and j, respectively.
The reason behind the computation of a relative variogram is an implicit assumption that the assay values display a proportional effect. In this situation, the relative variogram tends to be stationary. If the relationship between the local mean and the standard deviation is something other than linear, one should consider scaling the variograms by some function other than the mean.
Indicator:
This variogram is calculated from data that have been coded (transformed into zeros and ones) using a series of indicator cutoffs or thresholds. It can be used to estimate the proportion of different populations in a particular area.
At each point x in the deposit, consider the following indicator function for cutoff zc :

i(x; zc ) = 1 if z(x) ≤ zc , otherwise i(x; zc ) = 0

where x is the location, zc is a specified cutoff value, and z(x) is the value at location x.
Indicator Standardized:
This is the same as an indicator variogram set but the total sill is normalized to 1.
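The indicator transform itself is simple to sketch; the mean of the indicators estimates the proportion of values at or below the cutoff (an illustration, not Sigma code):

```python
# Illustrative indicator transform at a cutoff zc.
def indicator(values, zc):
    """1 where the value is at or below cutoff zc, else 0."""
    return [1 if v <= zc else 0 for v in values]

def proportion_below(values, zc):
    """Mean of the indicators = estimated proportion of values <= zc."""
    ind = indicator(values, zc)
    return sum(ind) / len(ind)
```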


Global Variograms
Global variograms will give you an idea of the average of various directional variograms. They will also give you an indication of spatial structure. The omnidirectional variogram is the “best” variogram you will see from your data, but it is not good enough to characterize directional continuity that is true to your deposit.
It is not a strict average, since the sample locations may cause certain directions to be over-represented. For example, if there are more east-west pairs than north-south pairs, then the omnidirectional variogram will be influenced more by east-west pairs.
The calculation of the omnidirectional variogram does not imply a belief that the spatial continuity is the same in all directions. It merely serves as a useful starting point for establishing some of the parameters required for sample variogram calculation. Since direction does not play a role in omnidirectional variogram calculations, one can concentrate on finding the distance parameters that produce the clearest structure. An appropriate class size, or lag, can usually be chosen after a few trials.
Another reason for beginning with omnidirectional calculations is that they can serve as an early warning for erratic directional variograms. Since the omnidirectional variogram contains more sample pairs than any directional variogram, it is more likely to show a clearly interpretable structure. If the omnidirectional variogram does not produce a clear structure, it is very unlikely that the directional variograms will show a clear structure.

EXERCISE: Calculate an Omnidirectional Variogram


Calculate and plot an omnidirectional variogram of Total Copper — standardized to a sill of one.

To prepare an omnidirectional variogram, go to the geostatistics panel and choose to create a new global variogram. Sigma will automatically set most things for you. Choose Total Copper and build.

BASIC VARIOGRAM TERMINOLOGY


The basic terminology used to describe the features of the variogram is given below:
Sill:
The value of the variogram where it reaches a plateau, or levels off, is called the sill. A variogram typically has a sill approximately equal to the variance of the data.
Range:
Samples that are close to each other generally have similar values. As the separation distance between samples increases, the difference between the sample values, and hence the corresponding variogram value, will also generally increase. Eventually, however, an increase in the separation distance no longer causes a corresponding increase in the variogram value. Thus, the variogram reaches a plateau, or its sill value. The distance at which the variogram reaches the sill is called the range. The range is simply the traditional geologic notion of the zone, or range, of influence. Beyond the range, the samples are no longer correlated; in other words, they are independent of each other.


Nugget Effect:
The nugget effect is a combination of:
• Short-scale variability that occurs at a scale smaller than the closest sample spacing
• Sampling error due to the way that samples are collected, prepared and analyzed
The ratio of the nugget effect to the sill is often referred to as the relative nugget effect and is usually quoted in percentages.


Downhole Variograms
Downhole variograms are useful for studying variability at a very small distance. This information can be used for the directional variogram modeling process. It can help you determine the nugget value that may not be apparent when calculating the directional variograms.

LEARNING OBJECTIVE: Calculate downhole variograms and estimate nugget effect.

EXERCISE: Calculate a Downhole Variogram


In Sigma, calculate a downhole variogram for total copper, both combined and separately for each of the three mineralization zones. Then open the graphs and model them.

Create Downhole Variogram: Geostatistics tab → Downhole Variogram → Data Source Composites → Input Variable Total Copper → Variogram type Traditional Standardized. In Lag Parameters, use Type: Lag Distance, Lag Distance: 15, Number of Lags: 20, and Lag Tolerance: 7.00. Create one for all Total Copper values, then create one for each Mineralization type using the Data Filters tab.

Model Downhole Variogram: Geostatistics tab → Variogram Fit → Add one of the Downhole Variograms. Click Autofit. Investigate the calculated sill and nugget effect.


Variogram Maps
The deposition of metals in ore bodies typically follows three-dimensional structures. In mining, variogram maps are used to study these structures based on features captured along two-dimensional planes in the variogram space. To have a better picture of the three-dimensional structure, variogram maps are calculated along orthogonal planes. These variogram maps are rotated in different configurations, with respect to the system origin, to find the main directions of continuity.

LEARNING OBJECTIVE: Learn to create variogram maps for zones of interest.
Along with neutral models, variogram maps are calculated to explore general characteristics of domains, including the presence of mean trends, the presence of sub-structures, and general spatial continuity. These characteristics define whether the domain has a stationary behavior or not. In case the domain presents non-stationary features, it is recommended to pre-process the data source and re-calculate the variogram maps. For example, in the case of a strong mean trend, it is recommended to remove it from the data source and re-calculate the variogram maps based on the residual values. The domain is then estimated or simulated with the residual values, and the trend is added back later.
There are different metrics, other than the variogram, that are used to quantify the spatial continuity of the metal content attribute, including the pairwise variogram, correlogram, and madogram. However, these other metrics tend to overestimate the spatial continuity, and their use does not permit a thorough validation of the data source. These alternative metrics are typically used in preliminary studies when it is difficult to estimate the spatial continuity of the domain using the variogram. In Sigma, for practicality, even when the metric used is not the variogram, the maps are still referred to as variogram maps.
Variogram maps are simple to create and use. You will first create a variogram map based on an appropriate filter set. You will then use automatic fitting to rotate the variogram map to a reasonably correct direction. You can then adjust the direction to get a better fit. Finally, you will export the variogram map to three principal variograms for the major, minor, and vertical directions. These will be fitted with a variogram model and exported for use in interpolation.

EXERCISE: Create variogram maps for zones of interest


Create and investigate variogram maps for each of the mineralized zones. Use the “Compute projection angles” option to orient the ellipse in the continuity directions. Use the Variogram Fit tool to fit a model to the three perpendicular directions, and export the ellipse shape as an object to view in MP3D.


Create Variogram Map Geostatistics tab → Variogram Map. Use Torque Composites as the Data
source → Input Variable: Total Copper. Use the data filter tab to limit to
the separate mineralogy bodies. Once generated, click the “compute pro-
jection angles” link to calculate directions of least variance.

Run the Variogram Fitting Tool: Once the map is created, select the “Variogram Fitting Tool” button in the Parameters tab of the variogram map. Name it after the filtered mineralogy of the Variogram Map.

Fit the Variograms Click the “Update” tool to make the model active. You can then edit the
model by clicking on the red and black icons on the best fit line. Note that
the sill in all directions must be the same, however the range, or how quickly
the line reaches the sill, is adjustable for each direction. Use the “prorate”
button next to the Struct columns to automatically make the contribution
add to the sill.


Variogram Models
Practical usage of the experimental variogram requires a description of the variogram by a mathematical function or a model. Many models can describe experimental variograms; however, some models are more commonly used than others.

LEARNING OBJECTIVE: Model variograms in Sigma and view them in MinePlan 3D.

The most common models are the spherical, exponential, Gaussian, and linear models.


Spherical Models:
This is the most commonly used model to describe a variogram. The definition is given by:

γ(h) = c0 + c[1.5(h/a) − 0.5(h3 /a3 )] if h < a
γ(h) = c0 + c if h ≥ a

In this equation, c0 refers to the nugget effect, a refers to the range of the variogram, h is the distance, and c0 + c is the sill of the variogram. The spherical model has a linear behavior at small separation distances near the origin, but flattens out at larger distances and reaches the sill at a, the range. It should be noted that the tangent at the origin reaches the sill at about two-thirds of the range.
Linear Models:
This is the simplest of the models. The equation of this model is as follows:

γ(h) = c0 + A · h

In this equation, c0 is the nugget effect and A is the slope of the variogram.
Exponential Models:
This model is defined by a parameter a (effective range 3a). The equation of the exponential model is:

γ(h) = c0 + c[1 − exp(−h/a)] , h > 0

This model reaches the sill asymptotically. Like the spherical model, the exponential model is linear at very short distances near the origin. However, it rises more steeply and then flattens out more gradually. It should be noted that the tangent at the origin reaches the sill at about one-third of the effective range.
Gaussian Models:
This model is defined by a parameter a (effective range a√3). The equation of the Gaussian model is given by:

γ(h) = c0 + c[1 − exp(−h2 /a2 )] , h > 0

Like the exponential model, this model reaches the sill asymptotically. The distinguishing feature of the Gaussian model is its parabolic behavior near the origin.
Nested Structures
A variogram function can often be modeled by combining several variogram functions:

γ(h) = γ1 (h) + γ2 (h) + ... + γn (h)

For example, there might be two structures displayed by a variogram. The first structure may describe the correlation on a short scale. The second structure may describe the correlation on a much larger scale. These two structures can be defined using a nested variogram model. In using nested models, one is not limited to combining models of the same shape. Often the sample variogram will require a combination of different basic models. For example, one may combine spherical and exponential models to handle a slow rising sample variogram that reaches the sill asymptotically.
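The model equations above can be sketched directly. This is a hedged illustration: the parameterizations follow the equations in this chapter, and a nested model simply sums its structures.

```python
import math

# Sketch of the variogram model equations from this chapter.
def spherical(h, c0, c, a):
    """Spherical model: linear near the origin, sill c0 + c reached at range a."""
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)

def exponential(h, c0, c, a):
    """Exponential model: reaches the sill asymptotically (effective range 3a)."""
    return c0 + c * (1 - math.exp(-h / a))

def gaussian(h, c0, c, a):
    """Gaussian model: parabolic near the origin, sill reached asymptotically."""
    return c0 + c * (1 - math.exp(-(h * h) / (a * a)))

def nested(h, structures):
    """structures: (model_fn, c0, c, a) tuples; put the nugget on one only."""
    return sum(fn(h, c0, c, a) for fn, c0, c, a in structures)
```

For example, a nested spherical + exponential model with contributions 0.4 and 0.5 over a 0.1 nugget approaches a total sill of 1.0 at large distances.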

EXERCISE: Model Three Principal Directions


Using a previously calculated variogram map, model the three principal directions simultaneously by picking
a common nugget effect (use the nugget from the downhole variograms), sill contribution and corresponding
variogram structure. You are fitting geometric anisotropy. This is a trial-and-error exercise, so you may have to
use more than one variogram structure to fit all directions. You may need to change the lag distance to get
better short-range models, since the lag distances used by the variogram map may be too large. When you
are satisfied, click “Export” to create a .var file to be used later in the Model Interpolation Tool.


EXERCISE: Display Variogram Parameters


Click the Export button and save it in your project directory. In MP3D, right click in the Data Manager →
Import → Variogram file. Choose coordinates to place it at and view in the viewer.

Sectional view

3D view


Interpolation Techniques
Interpolating the model is the only way to transfer the composite grades or qualities into the 3D block model (3DBM). Different types of interpolation routines are available in MinePlan. This course will cover inverse distance weighting (IDW) and kriging.

LEARNING OBJECTIVE: Interpolate a 3D block model using inverse distance weighting and kriging.
Ordinary kriging is designed primarily for the local estimation of block grades as a linear combination of the available data in or near the block, such that the estimate is unbiased and has minimum variance. Ordinary kriging is linear because its estimates are weighted linear combinations of the available data; it is unbiased because it constrains the weights to sum to one; and it is “best” because it aims to minimize the variance of the errors.
The conventional estimation methods, such as inverse distance weighting, are also linear and theoretically unbiased. The distinguishing feature of ordinary kriging is its aim of minimizing the error variance.
INVERSE DISTANCE WEIGHTING
Inverse distance weighting (IDW) is a common estimation method. Each sample weight is inversely proportional to a power of the distance between the sample and the point being estimated. The equation is as follows:

z∗ = [∑(1/di^p )z(xi )] / ∑(1/di^p ) , i = 1, . . . , n

In this equation, z∗ is the estimate of the grade of a block or a point, z(xi ) refers to the sample grade, di is the distance between sample i and the point being estimated, p is an arbitrary exponent, and n is the number of samples.

Inverse Distance Squared

Three two-sample cases with grades 1.1 and 0.5:

Case 1 — both samples 100m from the block:
Est = [(1/100²) ∗ 1.1 + (1/100²) ∗ 0.5] / (1/100² + 1/100²) = 0.8
Weights: 1.1 → 0.5, 0.5 → 0.5. Equal distances produce the same weights.

Case 2 — sample 1.1 at 100m, sample 0.5 at 50m:
Est = [(1/100²) ∗ 1.1 + (1/50²) ∗ 0.5] / (1/100² + 1/50²) = 0.62
Weights: 1.1 → 0.2, 0.5 → 0.8. Different distances produce different weights.

Case 3 — both samples 100m from the block, on the same side:
Est = [(1/100²) ∗ 1.1 + (1/100²) ∗ 0.5] / (1/100² + 1/100²) = 0.8
Weights: 1.1 → 0.5, 0.5 → 0.5. Equal distances produce the same weights regardless of the relative location of the samples.
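The cases above can be reproduced with a short script. This is a minimal sketch of the IDW formula in plain Python (not MinePlan code); the grades and distances are taken from the examples above.

```python
def idw_estimate(grades, distances, power=2):
    """Weights are 1/d^p, normalized so they sum to 1."""
    weights = [1.0 / d**power for d in distances]
    return sum(w * g for w, g in zip(weights, grades)) / sum(weights)

est1 = idw_estimate([1.1, 0.5], [100.0, 100.0])  # equal distances -> 0.8
est2 = idw_estimate([1.1, 0.5], [100.0, 50.0])   # the closer 0.5 dominates -> 0.62
print(round(est1, 2), round(est2, 2))
```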


KRIGING ESTIMATOR
The kriging estimator is a linear estimator of the following form:

z∗ = ∑ λi z(xi), i = 1, . . . , n

In this equation, z∗ is the estimate of the grade of a block or a point, z(xi) refers to the sample grade, λi is the corresponding weight assigned to z(xi), and n is the number of samples. The weighting process of kriging is equivalent to solving a constrained optimization problem where the objective function is to minimize the error variance σ² = F(λ1, λ2, λ3, ..., λn) subject to Σλi = 1 in the case of ordinary kriging.
This constrained optimization problem can be readily solved by using Lagrange multipliers.
KRIGING SYSTEM
Ordinary kriging can be performed for estimation of a point or a block. The linear system of equa-
tions for both cases is similar.
POINT KRIGING
The point kriging system of equations in matrix form can be written in the following form:

C ∗ λ = D

| C11 · · · C1n 1 |   | λ1 |   | C10 |
|  ⋮         ⋮  ⋮ | ∗ |  ⋮ | = |  ⋮  |
| Cn1 · · · Cnn 1 |   | λn |   | Cn0 |
|  1  · · ·  1  0 |   | µ  |   |  1  |

The matrix C consists of the covariance values Ci j between the random variables Vi and V j at
the sample locations. The vector D consists of the covariance values Ci0 between the random
variables Vi at the sample locations and the random variable V0 at the location where an estimate
is needed. The vector λ consists of the kriging weights and the Lagrange multiplier. It should be
noted that the random variables Vi , V j , and V0 are the models of the phenomenon under study.


BLOCK KRIGING
The difference between block kriging and point kriging is that the estimated point is replaced by a block. Point-to-block correlation is the average correlation between a sampled point, i, and all points within the block. In practice, a regular grid of points within the block is used. Consequently, the matrix equation includes point-to-block correlations.
The block kriging system is similar to the point kriging system given above. In point kriging, the covariance vector D consists of point-to-point covariances. In block kriging, it consists of block-to-point covariances.
The covariance value CiA is no longer a point-to-point covariance like Ci0, but the average covariance between a particular sample and all of the points within block A.
Kriging

Variogram: spherical, C0 = 0.2, C1 = 0.8, RY = 500m, RX = 150m, RZ = 150m, R1/R2/R3 = 90/0/0.

| C11 C12 1 |   | W1 |   | C1B |
| C21 C22 1 | ∗ | W2 | = | C2B |
|  1   1  0 |   | µ  |   |  1  |

With the covariances calculated from the variogram:

| 1.0 0.1 1 |   | W1 |   | 0.56 |
| 0.1 1.0 1 | ∗ | W2 | = | 0.12 |
|  1   1  0 |   | µ  |   |  1   |

Grades Weights
1.1 0.744 (w1)
0.5 0.256 (w2)

Estimation = 1.1*0.744 + 0.5*0.256 = 0.95

Relative location is important. Grade 1.1 gets most of the weight because of RY = 500m at
90◦ rotation (Y axis is rotated 90◦ to the east).
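The two-sample system above can also be solved numerically. A sketch using NumPy; the covariance values (1.0, 0.1, 0.56, 0.12) are taken from the worked example and are assumed to have already been computed from the spherical variogram.

```python
import numpy as np

# Left-hand side: sample covariances plus the unbiasedness constraint row/column
C = np.array([[1.0, 0.1, 1.0],
              [0.1, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
# Right-hand side: sample-to-block covariances, and 1 for the constraint
D = np.array([0.56, 0.12, 1.0])

w1, w2, mu = np.linalg.solve(C, D)
estimate = 1.1 * w1 + 0.5 * w2
print(round(w1, 3), round(w2, 3), round(estimate, 2))  # 0.744 0.256 0.95
```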
Covariances Cij between samples and covariances CiB between samples and blocks are calculated from the variogram function. The covariance function and the variogram function are related by the following formula:

γ(h) = C(0) − C(h)


Model Interpolation Tool


All interpolations for the following exercises in this workbook can be done with the Model Interpola-
tion Tool. Model interpolation is an essential part of the modeling process. Our Model Interpolation
Tool will allow you to set up and run your interpolations, save and share your setups, and look into
the details of your methods. It’s easy to set up, fast to run, and transparent to audit.

Data Setup
When creating a new model interpolation using MIT, you will first be prompted with the data setup. In this panel you can connect to your model as well as your data input. Data input options include Torque composites, 3D points, downhole points, or MinePlan composites (file 9). MIT can be used to set up a simple IDW run, kriging, and more advanced options. The interpolation method can also be defined in the data setup panel.

Full interpolation methods/options - Tried and true engines


All interpolation methods are available! We have made sure that the new interface still uses all the interpolation methods that were available before. Once you have selected the method you would like, you will be guided and asked only for the information and parameters that are relevant to that method.


Interface - Clear and Concise


We have built an interface that is both modern and easy to use. It follows the lead of our new tools
and takes advantage of a ribbon toolbar as well as a navigation bar, a summary window allowing
you to go through your setup and immediately see the state of your interpolation, a workspace for
inputting your parameters, and finally a message window to inform you of any run information.

Primary Search
The primary search panel allows the user to define a search radius around each block. This can
be a spherical search or an ellipsoidal search allowing the user to adjust the search distance de-
pendent on orientation. Here, the user can also apply dynamic unfolding or a relative coordinate
transform.


Choosing the ellipsoidal search option will further reduce the influence of composites along the minor axis. This is less important in a kriging interpolation because the variogram already handles sample weighting. In the example below, COMP2 is rejected because it falls outside the ellipsoidal search. Using an ellipsoidal search also makes a difference after composites are selected. In both the IDW and kriging cases, the tool sorts composites by their anisotropic distances and picks the closest ones if the number of selected composites exceeds the maximum number of composites to use. In the IDW case, the distances used in the IDW formula are also the adjusted (anisotropic) ones. In the kriging case, the weighting is handled by the variogram (which by default is anisotropic if different ranges are used for different directions). All stored distances are also anisotropic.

Selection Rules
The selection rules panel allows the user to define the minimum and maximum number of com-
posites required per block as well as the maximum number of composites per hole. The maximum
distance to the closest composite can also be constrained (0 is default and will not apply this rule).


• Closest composite is within PAR7: the block can be interpolated.
• Closest composite is outside PAR7: the block cannot be interpolated.

• Min number of composites to use = 1: since there are 2 composites within PAR4, PAR8 is ignored, so the block can be interpolated.
• Min number of composites to use = 1: since only the minimum number of composites is available, it is checked against the smaller PAR8 radius. The composite is outside PAR8, so the block cannot be interpolated.
• Min number of composites to use = 1: since there is 1 composite available, PAR8 is checked; the block can be interpolated (composite inside PAR8).


Octant Search
In the octant panel, you may further split your search volume into segments. The figures labeled 1–4 show the various splits: 1 = Octant, 2 = Quadrant, 3 = Split Octant, 4 = Split Quadrant.

Left: If the max number of adjacent empty octants is 2, then this block cannot be interpolated because there are 3 adjacent empty octants.
Right: If the max number of adjacent empty octants is 2, then this block can be interpolated because there are not 3 adjacent empty octants.


Composites - Geologic Rules


For geologic matching, select a Model Item and a Composite Item with geologic codes. If code
matching is used, then only composites of the same code as in the block will be used to interpolate
that block. The “after geologic mapping” toggle sets the order in which geological matching and
the check of the distance to the closest composite are performed.

Composites - Filters
Composites used in the interpolation can be filtered out based on attribute values in the compos-
ite.

Calculations – Item Mapping


Grade items used for interpolation are selected. You can run polygonal grade assignment to
compare results. Polygonal interpolated grades can aid in evaluating the reliability of other inter-
polation methods.


Calculations – Store Items


This panel also allows you to select items from the model file for storing model validation information, e.g., the distance to the closest composite for a particular block.

Calculations – Variography
If you have a variogram file (generated in Sigma or MSDA) you can define it here.

If you are running this procedure without a variogram parameter file, you can choose to build the variogram, which prompts you to define the variogram parameters you previously calculated outside of MinePlan. Check the “use variogram file” option if you previously exported a .var file from Sigma.


Calculations – Method Options


You may enter a composite weighting factor item, although this is not necessary, as you would ordinarily have factored in weighting when generating the composites.

IDW extra weighting example for 2 composites using power of 2.


Default:

Where: grd = grade, d = distance, wt = weighting item


Apply weighting factor from a weight item after taking inverse distance:


Kriging extra weighting example for 2 composites.


The composite length weighting is done after the Kriging weights are computed.

Where: grd = grade, wt = weighting item, Kwt = Kriging weights


You can further refine your general search parameters to the ellipse defined in your variogram(s). If your search ellipse (variogram) has rotation, you can define the rotation convention that was used to calculate it.
The “Max 3D distance” (PAR4) should match the major axis of the ellipsoidal search. The major axis is the y axis and the minor axis is the x axis, regardless of which measurement is longest. MinePlan gives you the option of three standard conventions: GSLIB-MS, MEDS or COORD. Further information on these can be found in the help documentation.

Model Limits - Model Range


Use the sliders or type in the fields to limit the area of the model interpolated based on coordinates.

Model Limits - Block Limiting


Interpolation can be limited to certain model blocks based on geology codes or rock types.


Advanced - Outliers
Outliers are used to restrict grades above or below a cutoff. The restriction is applied to the primary item used for interpolation (the default of 0 for the high and low cutoffs means no outlier logic is applied).

Run One Block (Debug)


An ideal way of troubleshooting and delving into the details of your interpolation setup is to use
the Model Interpolation Tool’s “Run One Block” function (See figure 6). It allows you to pick one
block and run the interpolation only on that block. It produces a composite coordinate text file,
a block centroid file and an ellipse geometry object (msr) to view in MP3D. You can also view a
table of results in the report file that gets produced in your project directory. One feature of this
dialog is that it will stay open so you can try different parameters and quickly launch “Run One
Block” after tweaking your interpolation setup.

EXERCISE: Interpolate Using Multiple Methods


Use the model interpolation tool to set up a simple IDW run for total copper in primary sulfides. Honor rock
types and mineralization codes (matching codes with composites). Use variogram ranges as guidelines for
search parameters. Repeat for kriging, and store the kriging variance back to the blocks. Calculate total
copper using the polygonal method as well for comparison. You are going to use this as the representative
declustered composite distribution for validation purposes. Restrict the influence of the outlier values.

Repeat interpolation for the rest of the mineralizations. Use relative elevation items if needed. The procedure
relev.dat calculates relative items from a surface and stores them back to the model and drillholes.


Kriging Variance
LEARNING OBJECTIVE: Calculate and interpret kriging variance.

For each block or point kriged, a kriging variance is calculated. The block kriging variance is given by:

σ²OK = CAA − [Σ(λi ∗ CiA) + µ]

Where:
CAA is the variance of the domain at the scale of the estimate. In practice, this average block-to-block variance is also approximated by discretizing the area A into several points. It is important to use the same discretization for the calculation of the point-to-block covariances in D in the kriging equations. If one uses different discretizations for the two calculations, there is a risk of getting negative error variances.
CiA is the covariance between sample i and the area (block) of estimation.
λi is the kriging weight for each sample.
µ is the Lagrange multiplier from the kriging equations.
For the point kriging variance, CAA is replaced by the variance of the point samples, or simply by the sill value of the variogram. CiA is replaced by Cij (point-to-point covariance).
Kriging variance does not depend directly on the data. It depends on the data configuration.
Since it is data value independent, the kriging variance only represents the average reliability of
the data configuration throughout the deposit. It does not provide the confidence interval for
the mean unless one makes an assumption that the estimation errors are normally distributed with
mean zero.
However, if the data distribution is highly skewed, the errors are definitely not normal because one
makes larger errors in estimating a higher-grade block than a low-grade block. Therefore, the relia-
bility should be data value dependent, rather than data value independent. For a fixed sampling
size, different sampling patterns can produce significantly different estimation variances. In two
dimensions, regular patterns are usually at the top of the efficiency scale in terms of achieving a
given estimation variance with the minimum number of data, while clustered sampling is the most
inefficient.


You can also express the kriging variance as:

σ²R = σ²z + ΣΣ(λi λj Ci,j) − 2Σ(λi Ci,0)

Where:
σ²z is the sample variance.
Ci,j is the covariance between samples.
Ci,0 is the covariance between samples and the location of the estimate.
This is essentially the same formula as the initial formula. In the case of block kriging, σ2z is the block
variance. This formula simply says that the kriging variance (error of estimation) is:
The data variance + the covariance between samples - the covariance between the samples
and the block.
Therefore:
• As the variance of the data increases, the error will also increase – 1st component (added).
• As the covariance between data increases, the error will also increase. High covariance between data means that the data may be clustered, so they should produce a higher error – 2nd component (added).
• As the data gets closer to the block, the covariance between a composite and the block increases, so the error decreases – 3rd component (subtracted).
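Continuing the earlier two-sample point kriging example, both forms of the kriging variance give the same number. Using the sill of 1.0 as the point variance is an assumption for illustration (for point kriging, CAA reduces to the sill):

```python
import numpy as np

# Kriging system from the worked example (two samples, unbiasedness constraint)
C = np.array([[1.0, 0.1, 1.0],
              [0.1, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
D = np.array([0.56, 0.12, 1.0])
w1, w2, mu = np.linalg.solve(C, D)

sill = 1.0                                   # point variance (C0 + C1)
lam = np.array([w1, w2])                     # kriging weights
c_i0 = np.array([0.56, 0.12])                # sample-to-point covariances
c_ij = C[:2, :2]                             # sample-to-sample covariances

# Form 1: sill - [sum(lambda_i * C_i0) + mu]
var1 = sill - (lam @ c_i0 + mu)
# Form 2: sigma_z^2 + sum_ij(lam_i * lam_j * C_ij) - 2 * sum_i(lam_i * C_i0)
var2 = sill + lam @ c_ij @ lam - 2.0 * lam @ c_i0
print(round(var1, 3), round(var2, 3))        # both ~0.762
```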

Kriging variance should not be used by itself to assess confidence or classify reserves.

EXERCISE: Make Views of the Kriging Variance


What do you notice? Where do you get higher values?


Interpolation Debug
LEARNING OBJECTIVE: Debug an interpolation run.

Debugging the interpolation run is imperative to fully understand how changes in the search parameters affect the interpolation results. By debugging a run, the user will be able to make a list of the composites used for interpolating a block and, more importantly, make a visual representation of the search parameters.

EXERCISE: Debug an Interpolation


Use the procedure pintrpq.dat to debug one block with the regular interpolation scheme for the mineral-
ization of the block. Open and study the report. Import the ellipsoidal object for viewing in MinePlan 3D
(MP3D).

Change some of the interpolation parameters and check the debug report each time. Evaluate the effect of
the change. Experiment with techniques such as reducing the maximum number of samples or the number
of composites per drillhole.


Dynamic Unfolding
Complex geology such as overturned folds, vertical intrusions, and non-parallel top/bottom surfaces can be accounted for during interpolation in MinePlan by applying Dynamic Unfolding. Dynamic Unfolding uses the surfaces generated by the Relative Surface Interpolator (RSI) to calculate distance and direction along those surfaces, then uses the results in interpolation.
The GeoTools MinePlan menu contains the Dynamic unfolding tool. Within the tool, RSI collection
creates surface geometries from a single surface or between two limiting surfaces (or with a sliced
solid). Fast Marching calculates the unfolded distance and direction between every composite
and every point on a grid covering the surfaces. Grid spacing controls the accuracy of the result:
The finer the grid spacing, the more accurate the unfolding results, but the longer it takes to carry
out the unfolding computation.
Dynamic unfolding is important for interpolation, variography, and coding when complex geol-
ogy is present. It improves representation of grade trends, incorporation of geology, and overall
understanding of the deposit.

Left: without Dynamic Unfolding. Right: after applying Dynamic Unfolding.


Visual Model Validation


LEARNING OBJECTIVE: View your model in 3D and create model grade shells.

Visual model validation allows you to interactively analyze and check the model in 2D and 3D modes. Creating grade shells of mineralized zones, changing between different display styles, changing the model display range, limiting the blocks based on item value, controlling the block size and creating an exposed ore display are some options you can explore within the Model View Properties dialog.
A grade shell is a solid representation of a code or real value retrieved directly from the 3D block
model (3DBM). It is intended to provide an indication of where the blocks with certain geologic
codes or grade values are located in the model.

EXERCISE: Make Model Views


Make model views of all the interpolation methods for total copper. Visualize the block model against the
informing drillhole data. Consider generating cross sections and assess whether the estimated grades are
reasonable given the nearby informing data.

EXERCISE: Make Grade Shells


Make grade shells based on different total copper cutoffs.



Graphic Model Validation


LEARNING OBJECTIVE: Validate the model using charts and graphs.

After you’ve coded and interpolated the model and completed all calculations, you will need to produce statistics and a total resource report. Statistics quantitatively explain the model and can be used to analyze things like total tonnage at each cutoff grade, grade distribution and probability. Statistics from the model can be compared to statistics taken from the source drillhole data set. You will validate the model using Sigma tools such as histograms, grade-tonnage curves, swath plots and scatter plots.

EXERCISE: Validate the Model with a Histogram and Grade Tonnage Curves
Make histograms and g/t curves in Sigma for inverse distance, kriging and polygonal total copper grades
and compare them. How close did the estimation methods come to the polygonal? The polygonal method
represents the declustered composite distribution. Theoretically, further adjustment may be needed in the
polygonal distribution to accommodate the change of support (volume variance correction).


Validate the Model with a Swath Plot


Repeat the comparison of the three methods via a swath plot in Sigma. Swath plots divide your model area along grid set boundaries and report the total tonnage and average grade of the blocks between these boundaries.
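Outside MinePlan, the same idea can be sketched by binning block centroids along one axis and reporting the tonnage and tonnage-weighted mean grade per swath. All data below is synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1000.0, n)        # block centroid eastings (synthetic)
grade = rng.uniform(0.1, 1.0, n)       # interpolated block grades (synthetic)
tonnes = np.full(n, 27000.0)           # tonnage per block (synthetic)

edges = np.arange(0.0, 1000.1, 100.0)  # swath boundaries every 100 m
swath_tonnes, swath_grades = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (x >= lo) & (x < hi)
    t = tonnes[sel].sum()
    g = (grade[sel] * tonnes[sel]).sum() / t   # tonnage-weighted mean grade
    swath_tonnes.append(t)
    swath_grades.append(g)
    print(f"{lo:5.0f}-{hi:5.0f}  tonnes={t:11,.0f}  grade={g:.3f}")
```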

Create Gridsets Create gridsets in the 3 principal directions based on the PCF. Delete any gridsets outside of the mineralized zone.

Create Swath Plot Sigma → Statistics tab → Swath Plot → Select stat15.dat block model and
CUID, CUKRG and CUPLY as inputs → Define swaths → Select Grid set →
Update.


EXERCISE: Validate the Model with a Scatter Plot


Using the procedure p61701.dat, you can back load the block values to MinePlan Torque. After running
p61701.dat, make a scatterplot in Sigma and compare estimation vs. actual composite value. How close
did you come?


EXERCISE: Validate the Model with Declustered Statistics


Declustered composite statistics represent the only known true statistics of the deposit. Make a resource
report in Sigma using a Pivot Report and loading the Declustered File created earlier in the Parameters.

Zone                 Cutoff     Tonnage            Mean
Oxide                >=0.000    302,995,500.0      0.255
                     >=0.100    259,518,225.0      0.289
                     >=0.200    190,090,125.0      0.340
                     >=0.300    100,259,300.0      0.426
                     >=0.500    22,404,720.0       0.618
                     >=0.800    1,532,794.0        0.982
                     >=1.000    447,300.0          1.250
Primary Sulfides     >=0.000    2,614,684,000.0    0.184
                     >=0.100    2,382,713,000.0    0.195
                     >=0.200    951,586,100.0      0.267
                     >=0.300    233,433,375.0      0.362
                     >=0.500    14,197,540.0       0.582
                     >=0.800    603,787.5          0.931
                     >=1.000    135,000.0          1.147
Secondary Sulfides   >=0.000    722,918,700.0      0.289
                     >=0.100    703,430,400.0      0.295
                     >=0.200    531,694,200.0      0.341
                     >=0.300    278,990,200.0      0.427
                     >=0.500    57,344,060.0       0.653
                     >=0.800    8,917,500.0        0.970
                     >=1.000    3,032,813.0        1.159
Total                >=0.000    3,640,599,000.0    0.211
                     >=0.100    3,345,661,000.0    0.223
                     >=0.200    1,673,370,000.0    0.299
                     >=0.300    612,682,800.0      0.402
                     >=0.500    93,946,320.0       0.634
                     >=0.800    11,054,080.0       0.970
                     >=1.000    3,615,113.0        1.170


Point Validation
LEARNING OBJECTIVE: Use the point validation technique to compare interpolation scenarios.

The point validation technique predicts a known data point using an interpolation plan; the surrounding data points are used to estimate the value at that point's location. This technique checks how well the estimation procedure can be expected to perform. It may suggest improvements, but it mainly compares interpolation scenarios and does not determine parameters. It reveals weaknesses and shortcomings.
Remember that all conclusions are based on observations of errors at locations where you do
not need estimates. You remove values that you are going to use, so the results are generally
pessimistic.
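The technique can be sketched as a leave-one-out loop. Here IDW stands in as the estimator on synthetic 1-D data; the actual MinePlan run uses procedure p52401q.dat with your full interpolation plan:

```python
import numpy as np

def idw(values, dists, power=2):
    w = 1.0 / np.asarray(dists) ** power
    return float(np.sum(w * values) / np.sum(w))

# Synthetic 1-D drillhole trace: positions (m) and grades
coords = np.array([0.0, 50.0, 120.0, 200.0, 260.0, 340.0])
values = np.array([0.4, 0.5, 0.9, 1.1, 0.8, 0.6])

estimates = []
for i in range(len(coords)):
    keep = np.arange(len(coords)) != i          # remove the point being predicted
    d = np.abs(coords[keep] - coords[i])
    estimates.append(idw(values[keep], d))      # re-estimate it from the rest
estimates = np.array(estimates)

corr = np.corrcoef(values, estimates)[0, 1]     # actual vs. estimated
print("correlation:", round(corr, 3))
```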

EXERCISE: Use Point Validation to Report Correlations


Run procedure p52401q.dat and use the same parameters you used in the block interpolation run for total
copper inside primary sulfides. Check the report for correlation between actual and estimation grades,
standard error of estimation and general statistics. In the Optional Parameters panel, backload the Kriging
estimate back to the drillhole database. Finally, create a scatterplot of the Point Check estimate and the
Total Copper values from your Composite source in Sigma. Try adding the dat524.csv as a data source and
create scatter plots for the CUIDW values vs the true values as well.


Change of Support
LEARNING OBJECTIVE: Validate your model using change of support techniques.

The term “support” at the sampling stage refers to the characteristics of the sampling unit, such as the size, shape and orientation. For example, channel samples and diamond drill core samples have different supports. At the modeling and mine planning stage, support refers to the volume of the blocks used for estimation and production. It is important to account for the effect of the support in our estimation procedures, since increasing the support has the effect of reducing the spread of data values. As the support increases, the distribution of data gradually becomes more symmetrical. The only parameter that is not affected by the support of the data is the mean. The mean of the data should stay the same even if we change the support.
There are some methods available for adjusting an estimated distribution to account for the sup-
port effect. The most popular ones are affine correction and indirect lognormal correction. All of
these methods have two features in common:
1. They leave the mean of the distribution unchanged.
2. They change the variance of the distribution by some “adjustment” factor.

Krige’s Relationship of Variance


This is the special complement to the partitioning of variances, which simply says that the variance
of point values is equal to the variance of block values plus the variance of points within blocks.
The equation is given below:

σ²p = σ²b + σ²p∈b

Therefore, a further adjustment to the polygonal estimation can be performed before we overlay
it to the estimation distribution in the form of a Grade-Tonnage Curve. The polygonal distribution
represents the declustered composite distribution, but a further adjustment is needed to accom-
modate the change of support. (We are using points-composites to estimate a block.) Therefore,
a variance adjustment factor f can be used.

f = K² = σ²p/σ²b (or σ²b/σ²p)

The formula used depends on whether we want to increase or decrease the variance.


Affine correction
The affine correction is a very simple correction method. Basically, it changes the variance of the distribution without changing its mean, by simply squeezing values together or by stretching them out around the mean. The underlying assumption for this method is that the shape of the distribution does not change with increasing or decreasing support.
The affine correction transforms the z value of one distribution to z′ of another distribution using the following linear formula:

z′ = √f ∗ (z − m) + m

where m is the mean of both distributions and f is the variance adjustment factor. If the variance of the original distribution is σ², the variance of the transformed distribution will be f ∗ σ².
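A minimal sketch of the affine correction on synthetic data; the mean is preserved and the variance is scaled by the adjustment factor f:

```python
import numpy as np

def affine_correction(z, f):
    """z' = sqrt(f)*(z - m) + m: mean preserved, variance scaled by f."""
    m = z.mean()
    return np.sqrt(f) * (z - m) + m

rng = np.random.default_rng(1)
z = rng.lognormal(mean=0.0, sigma=0.7, size=10000)  # skewed point-support data
f = 0.6                                             # block/point variance ratio
zb = affine_correction(z, f)

print(round(z.mean(), 3) == round(zb.mean(), 3))    # True: mean unchanged
print(round(zb.var() / z.var(), 3))                 # 0.6: variance scaled by f
```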
Indirect Lognormal Correction
The indirect lognormal correction is a method that borrows the transformation that would have
been used if both the original distribution and the transformed distribution were both lognormal.
The idea behind this method is that while skewed distributions may differ in important respects
from the lognormal distribution, change of support may affect them in a manner similar to that
described by two lognormal distributions with the same mean but different variances. The indirect
lognormal correction transforms the z value of one distribution to z’ of another distribution using
the following power formula:

z′ = a ∗ z^b

where a and b are given by the following formulas:

a = [m/√( f ∗ cv² + 1)] ∗ [√(cv² + 1)/m]^b

b = √[ln( f ∗ cv² + 1)/ln(cv² + 1)]

where cv is the coefficient of variation, m is the mean and f is the variance adjustment factor.
One of the problems with the indirect lognormal correction method is that it does not necessarily
preserve the mean if it is applied to values that are not exactly lognormally distributed. In that
case, the transformed values may have to be rescaled using the following equation:

z′′ = (m/m′) ∗ z′

where m’ is the mean of the distribution after it has been transformed.
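The formulas above can be chained into one routine, including the final rescaling step z′′ = (m/m′) ∗ z′ that restores the mean. This is a sketch on synthetic data with an assumed factor f = 0.5:

```python
import numpy as np

def indirect_lognormal(z, f):
    """Indirect lognormal change-of-support correction with mean rescaling."""
    m = z.mean()
    cv = z.std() / m                  # coefficient of variation
    b = np.sqrt(np.log(f * cv**2 + 1.0) / np.log(cv**2 + 1.0))
    a = (m / np.sqrt(f * cv**2 + 1.0)) * (np.sqrt(cv**2 + 1.0) / m) ** b
    zp = a * z**b                     # z' = a * z^b
    return zp * (m / zp.mean())       # z'' = (m / m') * z', restores the mean

rng = np.random.default_rng(2)
z = rng.lognormal(0.0, 0.8, 10000)    # synthetic point-support distribution
zb = indirect_lognormal(z, f=0.5)

print(abs(zb.mean() - z.mean()) < 1e-9)  # True: mean preserved by rescaling
print(zb.var() < z.var())                # True: spread reduced for f < 1
```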


EXERCISE: Calculate the Block Variance for Oxides


Use procedure PSBLKV.DAT with the variogram for oxides (mineralization code = 1). For a given block discretization (use 4x4x1), it will calculate the variance of the points within the block. The sill of the variogram represents the variance of the points. Using Krige's equation, it will then calculate the block variance.

The variograms we used in estimation were calculated by the correlation function. What do you need
to do to convert them to non-normalized variance, and why is it okay if you do so?

EXERCISE: Calculate the Variance, Average and Coefficient of Variation


Perform the calculations on the polygonal values within the oxides. You will need those numbers in the next
exercise.

EXERCISE: Perform Volume Variance Correction on Model Data


Use procedure PMODVC.DAT. You may want to consider affine correction if the following equation is true:

(σ²p − σ²b)/σ²p ≤ 30%

For the affine correction, you would need the block to polygonal variance ratio and the average grades
of the polygonal blocks. For indirect lognormal correction, you also need the coefficient of variation of the
polygonal block values. Store to a new model item. For indirect lognormal correction, you may need to use
a factor to multiply the grades to keep the mean the same, the problem being that if correction is applied to
a distribution that is not exactly lognormally distributed, correction does not preserve the mean.


EXERCISE: Rerun the GT Curve Tool


Rerun the GT curve tool for mineralization 1 for the new adjusted polygonal distribution and the estimation
and compare.

Discrete Gaussian Model Correction


Discrete Gaussian model (DGM) correction can also be used instead of the affine and indirect lognormal corrections. In this case, the change in shape of the initial distribution is accounted for, and the method works for any value. You can use procedure p40204.dat (dump polygonal model values to ASCII).


Model Classification
LEARNING OBJECTIVE: Make model calculations and create a multi-run for resource classification.

Classifying resources may be a subjective task. Various international reporting codes provide some guidance on the standard of work and levels of uncertainty required to classify resources into Measured, Indicated and Inferred categories. When Resources are converted to Reserves, they are then classified as Proven and Probable. Some of the model items that can be used for resource classification are:
• Kriging Variance
• Distance to the closest composite
• Number of composites used
• Average distance to the block
• Furthest Distance of a composite to the block
• Number of drillholes used
• Number of octants
• Pass number
• Relative Variability index using Combined Variance
• Dilution Index

EXERCISE: Assign Class Codes


Assign class codes based on kriging variance, distance to the closest composite from each block and num-
ber of drillholes used to interpolate each block. Use the Model Calculation Tool under the MinePlan 3D
(MP3D) Model menu. Use the following logic in the classification process. This process is just an example to
help users familiarize themselves with model calculations. Use the median and the 3rd quartile of both krig-
ing variance and distance to the closest composite distributions as class cutoffs. How would you calculate
those? Which graph in Sigma reports those two numbers? After calculating those figures, then:

If kriging variance > third quartile (Q3), then CLASS = 3 = Inferred
If distance to the closest composite > Q3, then CLASS = 3 = Inferred
If distance to the closest composite < median and kriging variance < median and number of drillholes > 3, then CLASS = 1 = Measured
Anything else should be Indicated (CLASS = 2)

Hint: Use Python Calculation type.

if $(DISTC) >= 167 or $(KVAR) >= 0.705:
    $(CLASS) = 3
elif $(DISTC) <= 95 and $(KVAR) <= 0.60 and $(NDDH) >= 3:
    $(CLASS) = 1
else:
    $(CLASS) = 2
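The same logic can be prototyped outside MinePlan in plain Python. The sketch below is illustrative only: the NumPy arrays stand in for the model items $(KVAR), $(DISTC) and $(NDDH), their values are made up, and the median and third-quartile cutoffs are derived with np.percentile rather than read from Sigma.

```python
import numpy as np

# Hypothetical block-model items, one value per block
kvar = np.array([0.30, 0.55, 0.65, 0.72, 0.90, 0.40])      # kriging variance
distc = np.array([40.0, 90.0, 120.0, 170.0, 200.0, 60.0])  # distance to closest composite
nddh = np.array([5, 4, 2, 1, 1, 6])                        # number of drillholes used

# Class cutoffs: median (Q2) and third quartile (Q3) of each distribution
kvar_med, kvar_q3 = np.percentile(kvar, [50, 75])
distc_med, distc_q3 = np.percentile(distc, [50, 75])

# Default every block to Indicated (2), then overwrite
cls = np.full(kvar.shape, 2, dtype=int)

# Measured: close composites, low variance, enough drillholes
measured = (distc <= distc_med) & (kvar <= kvar_med) & (nddh >= 3)
cls[measured] = 1

# Inferred: far composites or high variance; applied last so it takes
# precedence, mirroring the if/elif order of the MinePlan calculation
inferred = (distc >= distc_q3) | (kvar >= kvar_q3)
cls[inferred] = 3

print(cls)  # → [1 1 2 3 3 1]
```

With real model items you would export the two distributions first, take their median and Q3, and substitute those figures for the 95/167 and 0.60/0.705 cutoffs used in the exercise.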


EXERCISE: Make Plan Model Views of CLASS


Overlay the drillholes. How does the classification compare to the drillhole location?

EXERCISE: Calculate Octants


Calculate the number of octants used around each block. Then reclassify Indicated to Measured and
Inferred to Indicated if there are 3 or 4 octants around each block. Make plan maps again of the new
CLASS codes. What do you notice when compared to the map from the last exercise?
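One way to see how an octant count works is to bucket each composite by the signs of its offsets from the block centroid: each of the eight sign combinations of (dx, dy, dz) is one octant. The offsets and the one-step class upgrade below are hypothetical numbers for illustration, not MinePlan output.

```python
import numpy as np

def octants_used(dx, dy, dz):
    """Count distinct octants occupied by composites around a block.

    dx, dy, dz are composite offsets from the block centroid; the sign
    of each coordinate selects one of the 8 octants (index 0..7).
    """
    oct_index = ((np.asarray(dx) > 0) * 4
                 + (np.asarray(dy) > 0) * 2
                 + (np.asarray(dz) > 0) * 1)
    return len(np.unique(oct_index))

# Four composites scattered around one block, covering three octants
dx = [10.0, -5.0, 8.0, -12.0]
dy = [4.0, 3.0, -6.0, 2.0]
dz = [1.0, 2.0, 1.5, 0.5]
n_oct = octants_used(dx, dy, dz)
print(n_oct)  # → 3

# Upgrade one class step (3 -> 2 or 2 -> 1) when 3 or 4 octants are used
cls = 2  # Indicated
if n_oct in (3, 4) and cls > 1:
    cls -= 1
print(cls)  # → 1 (Measured)
```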


You may want to consider using some kind of a relative variability index to incorporate the actual
variance of the data. One of the disadvantages of using the kriging variance is that it is independent
of the actual values of the composites (it is calculated directly from the variogram).


MinePlan offers a Combined Variance calculation that can then be used in combination with the
estimation to create an index for classification purposes:

    Combined Variance = sqrt(local variance × kriging variance)

where the local variance of the weighted average (σ_w²) is:

    σ_w² = Σ w_i² × (Z_0 − z_i)²,   i = 1, ..., n   (n > 1)

where:
n is the number of data used,
w_i are the weights corresponding to each datum,
Z_0 is the block estimate,
and z_i are the data values.

    Relative Variability Index (RVI) = sqrt(Combined Variance) / Kriged Grade

Note: This is similar to the Coefficient of Variation, C.V. = σ/m
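As a concrete check of these formulas, here is a minimal sketch for a single block. The composite grades, kriging weights, and kriging variance are hypothetical numbers chosen for illustration; in practice they come from the kriging run.

```python
import numpy as np

# Hypothetical kriging result for one block
z = np.array([0.25, 0.31, 0.18, 0.40])   # composite grades z_i
w = np.array([0.35, 0.30, 0.20, 0.15])   # kriging weights w_i (sum to 1)
kriging_variance = 0.05                  # assumed, from the kriging system

Z0 = float(np.dot(w, z))                 # block estimate (weighted average)

# Local variance of the weighted average: sum of w_i^2 * (Z0 - z_i)^2
local_variance = float(np.sum(w**2 * (Z0 - z)**2))

# Combined Variance and Relative Variability Index
combined_variance = float(np.sqrt(local_variance * kriging_variance))
rvi = float(np.sqrt(combined_variance) / Z0)
print(round(Z0, 4), round(rvi, 3))
```

Like the coefficient of variation, a larger RVI flags a block whose estimate is less reliable relative to its grade, so it can replace the raw kriging variance as a classification cutoff in the exercise that follows.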

EXERCISE: Rerun with RVI


Rerun the classification using RVI instead of kriging variance. Then make model views of the new class item.

EXERCISE: Create Resource Model Tables


Create resource model tables split by classification codes. Use Sigma.


Class       Cutoff     Tonnage              Mean (%)

Measured    >=0.000    2,140,829,250.0      0.228
            >=0.100    2,001,742,000.0      0.239
            >=0.200    1,124,804,000.0      0.309
            >=0.300    452,300,800.0        0.411
            >=0.500    78,088,780.0         0.647
            >=0.800    11,160,360.0         0.978
            >=1.000    3,388,313.0          1.206

Indicated   >=0.000    1,036,055,000.0      0.191
            >=0.100    963,337,200.0        0.199
            >=0.200    391,592,175.0        0.275
            >=0.300    104,530,100.0        0.382
            >=0.500    11,901,600.0         0.607
            >=0.800    876,300.0            0.870
            >=1.000    23,437.5             1.090

Inferred    >=0.000    797,045,000.0        0.160
            >=0.100    684,792,500.0        0.175
            >=0.200    188,887,050.0        0.259
            >=0.300    37,973,490.0         0.383
            >=0.500    4,454,381.0          0.620
            >=0.800    547,500.0            0.875
            >=1.000

Total       >=0.000    3,973,929,000.0      0.205
            >=0.100    3,649,872,000.0      0.217
            >=0.200    1,705,283,000.0      0.296
            >=0.300    594,804,375.0        0.404
            >=0.500    94,444,760.0         0.641
            >=0.800    12,584,160.0         0.966
            >=1.000    3,411,750.0          1.206


Conclusion & Future Training


We hope you will be able to use the tools covered during this MinePlan software training course to
improve productivity at your mine. As you apply the concepts you have learned, please phone or
email us with questions. Our contact information is listed on the inside cover of this book and on
our website: hexagon.com/company/divisions/mining.
To review or update your support cases, reference our knowledge base, download software updates,
check for known issues and/or submit ideas, please log into the Hexagon Community:
community.hexagonmining.com
Note: If you do not yet have access to the Community, please request access by clicking on
"request login".

Future Training
Whether it takes a few hours or a few days, training with Hexagon’s newest tools can pay instant
dividends. Designed to fit your schedule, our mix-and-match formats support your learning needs
no matter what your expertise with MinePlan software.
Spend some time using our software in day-to-day applications. When you are comfortable
working with MinePlan software, contact us at hexagon.com/company/contact-us/professional-services
to set up your next training.

Estimating Resources Using Basic Geostatistics


Updated: April 12, 2023

©2009-2023 by Leica Geosystems AG. All rights reserved. No part of this document shall be reproduced, stored in a retrieval system, or
transmitted by any means, electronic, photocopying, recording, or otherwise, without written permission from Leica Geosystems AG. All
terms mentioned in this document that are known to be trademarks or registered trademarks of their respective companies have been
appropriately identified. MinePlan® is a registered trademark of Leica Geosystems AG. This material is subject to the terms in the Hexagon
Mining Terms and Conditions (available at http://www.hexagonmining.com/).
