10 Quality Control and Assurance (QAQC)
Abstract
The chapter provides a detailed review of the modern principles and techniques applied in the mining industry to assure the quality of samples and their appropriateness for evaluation of mineral deposits. This group of techniques is traditionally referred to as the Quality Assurance – Quality Control system and is often known by the acronym 'QAQC'. In general, QAQC procedures consist of monitoring the accuracy and precision of analytical results, controlling sample contamination, timely diagnostics of sample errors and identification of the error sources.
Keywords
QAQC • Precision • Accuracy • Standards • Sampling protocol • CV%
…of the accuracy acceptance can be further simplified (CANMET 1998):

|m − μ| ≤ 4 × S_W    (10.1.3)

Exercise 10.1.1.a A certified standard of iron ore was used for assessing the accuracy of the analytical method used in a studied laboratory. Certified characteristics of the standard sample are as follows:

60.73 % Fe – (μ) certified mean of the standard;
0.09 % Fe – (σ_C) certified within-laboratory standard deviation of the standard;
0.20 % Fe – (σ_L) certified between-laboratory standard deviation of the standard.

This certified standard material has been included in a single assay batch and analysed 10 times. The obtained results were as follows: 60.94, 60.99, 61.04, 61.06, 61.06, 61.09, 61.10, 61.14, 61.21, 61.24. Based on these results, can the given method be considered as accurate as required?
The arithmetic mean (m) of the replicate analyses of the given certified standard is 61.09 % Fe and their estimated standard deviation (S_W) is 0.092 % Fe. Substituting these values into equality (10.1.3) gives the following result:

|61.09 − 60.73| = 0.36 % Fe, which is less than 4 × S_W = 4 × 0.092 % Fe = 0.368 % Fe

This result shows that the analytical method is accurate (unbiased) as required.

10.1.1.2 Estimation of Accuracy by Repeat Analyses of the Certified Standards, Different Laboratories
When exploration samples have been analysed in several (p) different laboratories, the overall accuracy of the analytical results can be tested using statistical condition (10.1.4):

|m − μ| ≤ 2 × √[(S²_LM + S²_W / k) / p]    (10.1.4)

where:
μ – certified mean of a given standard sample;
m – arithmetic mean of the replicate analyses of this certified standard sample in the assay batch;
S_W – estimated within-laboratory standard deviation of the replicate analyses of the standard samples;
S_LM – estimated between-laboratory standard deviation of the replicate analyses of the standard samples;
k = n/p is the ratio of the total number of replicate analyses (n) of certified standards to the number of laboratories (p) participating in the Round Robin test.

Exercise 10.1.1.b A certified standard of iron ore was used for assessing the within-laboratory precision using the Round Robin analysis. Certified characteristics of the standard sample are as follows:

60.73 % Fe – (μ) certified mean of the standard;
0.09 % Fe – (σ_C) certified within-laboratory standard deviation of the standard;
0.20 % Fe – (σ_L) certified between-laboratory standard deviation of the standard.

The Round Robin test was performed using 34 laboratories (p = 34), and 110 analyses (n = 110) of the standard samples have been obtained. The ratio of the total number of analyses to the number of participating laboratories is k = 110/34 = 3.24. Results of the Round Robin test were as follows:

60.71 % Fe – (m) estimated mean of the standard;
0.10 % Fe – (S_W) estimated within-laboratory standard deviation of the standard;
0.06 % Fe – (S_LM) estimated between-laboratory standard deviation of the standard.
Substituting the corresponding values into equality (10.1.4) we obtain:

|m − μ| = |60.71 − 60.73| = 0.020 % Fe

2 × √[(S²_LM + S²_W / k) / p] = 2 × √[(0.06² + 0.10²/3.24) / 34] = 0.028 % Fe

Since 0.020 is less than 0.028, the overall accuracy of the analytical results satisfies condition (10.1.4).
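Both acceptance conditions are easily scripted. The following minimal Python sketch (not part of the original text; variable names are illustrative) reproduces the calculations of Exercises 10.1.1.a and 10.1.1.b.

import math
from statistics import mean, stdev

# Exercise 10.1.1.a: single-batch accuracy test, condition (10.1.3)
certified_mean = 60.73                      # certified mean of the standard, % Fe
replicates = [60.94, 60.99, 61.04, 61.06, 61.06,
              61.09, 61.10, 61.14, 61.21, 61.24]
m = mean(replicates)                        # arithmetic mean of the replicates (61.09 % Fe)
s_w = stdev(replicates)                     # within-laboratory standard deviation (0.092 % Fe)
print(abs(m - certified_mean) <= 4 * s_w)   # True: the method is accurate as required

# Exercise 10.1.1.b: Round Robin accuracy test, condition (10.1.4)
m_rr, s_w_rr, s_lm_rr = 60.71, 0.10, 0.06   # estimated mean, within- and between-laboratory SD
n_analyses, p_labs = 110, 34
k = n_analyses / p_labs                     # 3.24
limit = 2 * math.sqrt((s_lm_rr ** 2 + s_w_rr ** 2 / k) / p_labs)   # 0.028 % Fe
print(abs(m_rr - certified_mean) <= limit)  # True: overall accuracy is acceptable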
…the same laboratory. These data can be used for estimation of the analytical precision of the laboratory. Analytical precision is acceptable if the results of the replicate analysis of the certified standards assayed in this laboratory satisfy the statistical test (10.1.6) (ISO Guide 33):

(S_W / σ_C)² ≤ χ²_(n−1); 0.95 / (n − 1)    (10.1.6)
…precision of the analytical method. In this case the between-laboratories precision is assessed using the statistical test (10.1.8). Substituting the certified parameters and the results of the Round Robin analyses into this equality we obtain:

(S²_W + k × S²_LM) / (σ²_C + k × σ²_L) = (0.10² + 3.24 × 0.06²) / (0.09² + 3.24 × 0.20²) = 0.1573

χ²_(p−1); 0.95 / (p − 1) = χ²_33; 0.95 / 33 ≈ 1.44

The empirical ratio of the variances is less than the corresponding table value, indicating that the analytical method is as precise as those used in the certification of the standard.
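The two precision acceptance tests can be checked with a short script. In the sketch below (Python; scipy is assumed to be available; the explicit form of test (10.1.8) is inferred from the worked substitution above, so treat it as an assumption rather than a quotation) the chi-square quantiles replace the printed table values.

from scipy.stats import chi2

def within_lab_precision_ok(s_w, sigma_c, n, alpha=0.05):
    """Test (10.1.6): (S_W / sigma_C)^2 <= chi2_{(n-1); 0.95} / (n - 1)."""
    return (s_w / sigma_c) ** 2 <= chi2.ppf(1 - alpha, n - 1) / (n - 1)

def between_lab_precision_ok(s_w, s_lm, sigma_c, sigma_l, k, p, alpha=0.05):
    """Round Robin precision test (10.1.8), as used in the worked example above."""
    ratio = (s_w ** 2 + k * s_lm ** 2) / (sigma_c ** 2 + k * sigma_l ** 2)
    return ratio <= chi2.ppf(1 - alpha, p - 1) / (p - 1)

# Round Robin example from the text: 0.1573 is well below ~1.44, hence acceptable
print(between_lab_precision_ok(s_w=0.10, s_lm=0.06, sigma_c=0.09, sigma_l=0.20, k=3.24, p=34))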
10.1.2 Statistical Tests for Assessing the Data Bias Using the Duplicate Samples

The bias in the analytical data can be estimated using the duplicate samples assayed in a reputable external laboratory (10.1.9):

RE (%) = 100 × [Σ_{i=1}^{N} (a_i − b_i) / N] / m_a    (10.1.9)

where:
RE (%) – the bias value normalised to the mean of the original sample assays and expressed as a percentage;
a_i – original sample, assayed in the first laboratory;
b_i – duplicate sample, assayed in the reputable external laboratory;
m_a – mean of the original sample assays;
N – number of duplicate samples.

Eremeev et al. (1982) have suggested estimating the statistical significance of the bias detected using equality 10.1.9. The proposed approach (Eremeev et al. 1982) is based on comparing the experimentally calculated Student's t value (t_EXP) with the corresponding theoretical value t_(N−1); 0.95, determined for the given degrees of freedom (N − 1) and a confidence level of 0.95. The bias is statistically insignificant if the inequality (10.1.10) is satisfied:

t_EXP ≤ t_(N−1); 0.95    (10.1.10)

where:

t_EXP = |AVR| × √N / √{Σ_{i=1}^{N} [(a_i − b_i) − AVR]² / (N − 1)}

AVR – absolute bias, determined as the mean of the differences between original and duplicate sample assays, [Σ_{i=1}^{N} (a_i − b_i)] / N;
t_(N−1); 0.95 – critical value of the 0.95 quantile (α = 0.05) of the (t) distribution at (N − 1) degrees of freedom.
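A compact implementation of the relative bias (10.1.9) and the significance test (10.1.10) is sketched below (Python; scipy is assumed for the Student's t quantile; function names are illustrative).

import math
from statistics import mean, stdev
from scipy.stats import t as t_dist

def relative_bias_percent(a, b):
    """RE% (10.1.9): mean pairwise difference normalised to the mean original assay."""
    n = len(a)
    return 100 * (sum(ai - bi for ai, bi in zip(a, b)) / n) / mean(a)

def bias_is_insignificant(a, b, alpha=0.05):
    """Test (10.1.10): t_EXP <= t_{(N-1); 0.95}."""
    d = [ai - bi for ai, bi in zip(a, b)]
    avr = mean(d)                                    # average signed difference (AVR)
    t_exp = abs(avr) * math.sqrt(len(d)) / stdev(d)  # stdev(d) is the denominator of t_EXP
    return t_exp <= t_dist.ppf(1 - alpha, len(d) - 1)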
10.1.3 Diagnostic Diagram: Pattern Recognition Method

This method is based on the fact that specific types of analytical problems have recognisable patterns on special types of diagrams. The different distribution patterns of the analytical results are indicative of the error sources and types.
The pattern recognition technique is most effective when applied to certified standards (Leaver et al. 1997; Sketchley 1998; Abzalov 2008). The assayed values of the certified standards are plotted onto the diagram on a batch/time basis (Fig. 10.1). A good quality analysis will be characterised by a random distribution of the data points around the certified mean value on this diagram (Fig. 10.1a); 95 % of the data points will lie within two standard deviations of the mean and only 5 % of assays can lie outside the interval of two standard deviations from the mean. It is essential that the same number of samples should occur above and below the mean.
Fig. 10.1 Schematic diagrams showing the quality control pattern recognition method (Reprinted from (Abzalov 2008) with permission of the Canadian Institute of Mining, Metallurgy and Petroleum): (a) accurate data, statistically valid distribution of the standard values; (b) presence of 'outliers' suggesting transcription errors; (c) biased assays; (d) rapid decrease in data variability indicating possible data tampering; (e) drift of the assayed standard values
In some cases the grades of the standard samples significantly differ from their certified values (Fig. 10.1b). The presence of such 'outliers' most likely indicates data transcription errors (Abzalov 2008). This feature does not imply data bias but nevertheless indicates poor data management, suggesting possible random errors in the database.
Data bias is easily recognisable by a consistent shift of the standard sample assays (Fig. 10.1c). Usually this occurs because of failed equipment calibration or can be caused by changed analytical procedures in the laboratory.
A less common distribution pattern is when the dispersion of the standard sample grades rapidly decreases (Fig. 10.1d). Such a decrease in the standards variability is commonly interpreted (Sketchley 1998) as indicative of data tampering.
Accurate analyses are also characterised by a lack of systematic data trends on the grade vs. order of analysis diagrams. Trends can be recognised by a systematic increase or decrease of the assayed values of the standards (Fig. 10.1e). Another commonly used criterion for identification of possible trends is when two successive points lie outside the two standard deviations, or four successive points lie outside the one standard deviation (Leaver et al. 1997).
A systematic drift of the assayed standard values usually indicates a possible instrumental drift. Alternatively, it can also be caused by degradation of the standard samples. The author of the current paper is familiar with cases when the characteristics of the standard samples have degraded in comparison with their certified values because of inappropriate storing conditions of the standards,
Fig. 10.2 Diagram of the blank samples: Cu (ppm) values plotted against the sequential number of analysis (in chronological order)
when they were kept in large jars and have not been protected against vibration caused by operating equipment.
Blank assays are also presented on the grade vs. order of analysis diagram (Fig. 10.2) and arranged in the same manner as the assays of the standard samples. The main purpose of using blanks is to monitor the laboratory for possible contamination of samples, which is mainly caused by insufficiently thorough cleaning of equipment. The blank samples are usually inserted after high-grade mineralisation samples and, if the equipment has not been properly cleaned, the blank samples will easily detect it by showing increased grades of the metal of interest (Fig. 10.2).

Exercise 10.1.3 Construct the standard samples diagnostic diagram using the data in the Excel file Exercise 10.1.3.xls (Appendix 1). The file contains a built-in Visual Basic script (Standards) to be used with this exercise. Provide your interpretation of the obtained results. Use the macro code (Exercise 10.1.3.xls) with the data from your project.
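Exercise 10.1.3 relies on the workbook's own VBA macro; a rough Python equivalent of the underlying checks is sketched below (not part of the original text). The run-length criteria follow Leaver et al. (1997) as quoted above; function and variable names are illustrative.

def check_standard_sequence(values, certified_mean, certified_sd):
    """Apply the pattern-recognition rules to standard assays in batch/time order."""
    z = [(v - certified_mean) / certified_sd for v in values]
    outside_2sd = [abs(x) > 2 for x in z]
    return {
        # More than ~5 % of points outside +/-2SD suggests a problem
        "pct_outside_2sd": 100 * sum(outside_2sd) / len(z),
        # Trend criteria: two successive points outside 2SD, or four successive outside 1SD
        "two_successive_outside_2sd": any(
            abs(z[i]) > 2 and abs(z[i + 1]) > 2 for i in range(len(z) - 1)),
        "four_successive_outside_1sd": any(
            all(abs(x) > 1 for x in z[i:i + 4]) for i in range(len(z) - 3)),
        # Strong imbalance of points above/below the certified mean indicates bias
        "n_above_mean": sum(x > 0 for x in z),
        "n_below_mean": sum(x < 0 for x in z),
    }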
10.2 Precision Control

Precision of the samples is monitored by using matching pairs of samples. These are pairs of samples which are processed in a similar manner, allowing their comparative analysis. The difference between the assayed values of a sample and its duplicate is caused by errors associated with sample preparation and assaying. The precision error is mathematically deduced from the differences between matching pairs of data, and is usually represented as the variance of the assayed values normalised to the means of the corresponding pairs of the data (Abzalov 2008).

10.2.1 Matching Pairs of Data

Sample duplicates represent the most common type of the matching pairs of data. A duplicate sample is merely another sample collected from the same place and following the same rules as used for collecting the initial (original) sample. This is the main approach used in the mining industry to monitor the samples precision. In practice, the duplicate sample can be a second blast hole sample collected from the same blast hole cone, or another half of the drill core. It can also be a duplicate sample collected at a particular stage of the sampling protocol, such as coarse rejects after the crushed material has been split using an appropriate sample reducing device, or a second aliquot collected from the same pulverised pulp. When the sampling protocol includes several stages of comminution and subsampling, it is good practice to take duplicate samples at each subsampling stage.
The preparation procedure and analysis of the duplicate samples should be the same as used for their original samples. The duplicate sample can be analysed in the same laboratory as the original or can be sent to a different laboratory for interlaboratory control. When duplicates are analysed in the same laboratory where the original samples were assayed, the variation of the results allows estimating the precision errors incurred at the particular stage of the sampling protocol which is represented by the given duplicates. When duplicate samples are processed in a different laboratory, it is a common practice to choose an internationally recognised laboratory with audited and approved quality. In this case interlaboratory analysis of the duplicate samples allows assessing both the precision and the accuracy errors of the samples assayed in the tested laboratory.
Another type of the matching data pairs is twin holes. This technique is not a precision control method and is mainly used for verification of the previous drilling results. It will be described separately in the next chapter of the book.

10.2.2 Processing and Interpretation of Duplicate Samples

The precision error is quantified from the paired data through assessing the scatter of the data points with corrections for their deviation from the y = x line. Different methods are available for estimating the precision error from the paired data (Garrett 1969; Thompson and Howarth 1973, 1976, 1978; Howarth and Thompson 1976; Bumstead 1984; Shaw 1997; Francois-Bongarcon 1998; Sinclair and Blackwell 2002; Abzalov 2008). The most common methods are based on the assumption of linear relationships between the sampling and analytical errors and the concentration of the assayed metals (Thompson and Howarth 1973, 1976, 1978; Howarth and Thompson 1976; Bumstead 1984; Shaw 1997). Francois-Bongarcon (1998) has extended this principle to the more complex relationships described by a quadratic model. A special type of a linear model is the reduced major axis (RMA) (Sinclair and Bentzen 1998; Davis 2002; Sinclair and Blackwell 2002). This technique is applicable to linear cases when different sets of paired data exhibit systematic differences (bias). Pitard (1998) has suggested a special diagnostic diagram, the relative difference plot (RDP), which can be used for detailed analysis of the causes of poor data precision or sample biases. All these methods have been reviewed by Abzalov (2008) and compared by applying them to the same sets of the duplicated pairs of samples collected in operating mines and mining projects.

10.2.2.1 Method of Thompson – Howarth

This method was developed in the early 1970s (Thompson and Howarth 1973, 1976, 1978; Howarth and Thompson 1976) and has since become a popular technique for duplicated data analysis in the mining industry.
It uses the assumption that the precision error is normally distributed and that the variation of the samples precision, represented as the standard deviation of the duplicated data (S_C), can be expressed as a linear function of the samples concentration (C) and the standard deviation at zero concentration (S_0) (10.2.1):

S_C = S_0 + K × C    (10.2.1)

Relative precision (P_C) at the given concentration (C) can be determined using the equality (10.2.2), which represents precision at the one standard deviation confidence level.

P_C = S_C / C    (10.2.2)

Precision at the one standard deviation confidence level is chosen for consistency with the other precision estimates, which are discussed below.
Substituting (S_C) in equality 10.2.2 by its definition presented in equality 10.2.1, and multiplying by 100 to represent the relative precision error as a percentage, leads to the final formula (10.2.3):

P_C = 100 × [(S_0 / C) + K]    (10.2.3)

where:
P_C – precision at concentration C;
S_0 – standard deviation at zero concentration;
C_d = 2 × S_0 / (1 − 2K)    (10.2.4)
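The Thompson–Howarth relation can be fitted and applied with a few helper functions. The sketch below is a simplified reading of the published procedure: the group size of 11 pairs and the 1.048 factor converting a median absolute difference into a standard deviation estimate are taken from the Thompson–Howarth literature and are assumptions here, not quotations from this chapter.

from statistics import median

def thompson_howarth_fit(pairs, group_size=11):
    """Estimate S0 and K of Sc = S0 + K*C from duplicate pairs (simplified sketch)."""
    pairs = sorted(pairs, key=lambda p: (p[0] + p[1]) / 2)       # order by pair mean concentration
    xs, ys = [], []
    for i in range(0, len(pairs) - group_size + 1, group_size):
        group = pairs[i:i + group_size]
        xs.append(sum((a + b) / 2 for a, b in group) / group_size)
        ys.append(1.048 * median(abs(a - b) for a, b in group))  # median |a-b| -> SD estimate
    assert len(xs) >= 2, "need at least two full groups of pairs"
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - k * mx, k                                        # (S0, K) by least squares

def relative_precision_percent(c, s0, k):
    return 100 * (s0 / c + k)            # Pc, equation (10.2.3)

def practical_limit(s0, k):
    return 2 * s0 / (1 - 2 * k)          # Cd, equation (10.2.4)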
Table 10.1 Measures of relative error based on the absolute difference between duplicated data pairs

AMPD: AMPD = 200 × |a_i − b_i| / (a_i + b_i), %. Reference: Bumstead (1984). This estimator is also known as MPD (Roden and Smith 2001) or ARD (Stanley and Lawie 2007a).
HARD: HARD = 100 × |a_i − b_i| / (a_i + b_i), %. Reference: Shaw (1997). HARD is simply half of the corresponding AMPD.
where (a_i) is the original sample and (b_i) is its duplicate.
…tists, represent nothing else but the products of CV% by the constants (√2) and (√2/2), respectively (Stanley and Lawie 2007a).

AMPD = 100 × |a_i − b_i| / [(a_i + b_i)/2] = √2 × CV%    (10.2.9)

HARD = 100 × |a_i − b_i| / (a_i + b_i) = (√2/2) × CV%    (10.2.10)

Stanley and Lawie (2007a) have rightfully noted that using statistics like AMPD and HARD, which are directly proportional to the coefficient of variation, offers no more information than the CV itself.

10.2.2.3 Geostatistical Approach of the Duplicate Samples Analysis

An alternative approach to estimating the precision of analytical data involves calculation of various geostatistical measures of spatial correlation, in particular different variants of variograms. Garrett (1969) has suggested calculating the variance between the original samples and their duplicates by lognormally transforming their assay values and then applying the formula of the lognormal variogram (10.2.11). This approach did not find application in practice.

σ²_GAR = (1/2N) × Σ_{i=1}^{N} [ln(a_i) − ln(b_i)]²    (10.2.11)

The approach developed in the former USSR (Eremeev et al. 1982) is based on grouping the data by grade classes and calculating the precision errors (SR%) separately for each grade class. To obtain a reliable estimation of sample precision, it is recommended to analyse at least 30 data pairs for each grade class (Eremeev et al. 1982). Where samples have been assayed in different laboratories, the data are grouped and analysed separately for each laboratory.
The precision error (SR%) is calculated within each grade class as the ratio of the standard deviation of the duplicated data to the mean of all data within the given grade class (10.2.12):

SR% = 100 % × S / C̄ = 100 % × √[Σ_{i=1}^{N} (a_i − b_i)² / 2N] / [(Σ_{i=1}^{N} a_i + Σ_{i=1}^{N} b_i) / 2N]    (10.2.12)

where (N) is the number of sample pairs, (a_i) is the first sample and (b_i) is the duplicate sample of the (i)-th pair.
It is noteworthy that equality (10.2.12) is the square root of the relative variogram² applied to the duplicated samples and expressed as a percentage.

² The conventional formula (Goovaerts 1997) of the relative variogram is γ_R(h) = [1/(2N)] × Σ_{i=1}^{N} [Z(x_i) − Z(x_i + h)]² / m², where Z(x) is the value of the variable (Z) at location (x), (h) is the vector separating Z(x_i) from the Z(x_i + h) points and (m) is the mean of the variable Z(x).
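These estimators reduce to one-liners; the sketch below (Python, names illustrative, not part of the original text) also includes the grade-class statistic SR% of (10.2.12). Pairs are passed as (a_i, b_i) tuples.

import math

def cv_percent(a, b):
    """CV% of a single duplicate pair: pair standard deviation over pair mean."""
    return 100 * math.sqrt(2) * abs(a - b) / (a + b)

def ampd(a, b):
    return 200 * abs(a - b) / (a + b)    # equation (10.2.9), equal to sqrt(2) * CV%

def hard(a, b):
    return 100 * abs(a - b) / (a + b)    # equation (10.2.10), equal to CV% * sqrt(2)/2

def sr_percent(pairs):
    """SR% of a grade class (10.2.12): pooled standard deviation over the class mean."""
    n = len(pairs)
    s = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))
    c_bar = sum(a + b for a, b in pairs) / (2 * n)
    return 100 * s / c_bar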
F. Pitard has suggested (pers. comm.) calculating the relative variance (σ²_FP) of the matching pairs of the data, (a_i) and (b_i), using formula (10.2.13) presented below.

σ²_FP = (1/2N) × Σ_{i=1}^{N} [(a_i − b_i) / ((a_i + b_i)/2)]² = (2/N) × Σ_{i=1}^{N} [(a_i − b_i) / (a_i + b_i)]²    (10.2.13)

The formula (10.2.13) is known in geostatistics as a pair-wise³ relative variogram (Goovaerts 1997) and it is particularly useful for variables characterised by strongly skewed statistical distributions and when outliers are present.
The average precision error is easily deduced from equality (10.2.13) by taking the square root of the relative variance and multiplying by 100 % to represent the precision error as a percentage at one standard deviation (10.2.14):

P_FP % = 100 % × √(σ²_FP)    (10.2.14)

However, it is easy to see that when the pair-wise relative variogram is applied to duplicated samples it simply measures the relative variance of the paired data. The square root of this value is equal to the average coefficient of variation. Hence, the two approaches, one that uses the coefficient of variation and the second that uses a geostatistical approach based on the application of a pair-wise relative variogram for measuring the precision error, are identical because both are based on calculating the average relative variance of the paired data (10.2.15):

CV_AVR (%) = P_FP (%) = 100 × √[(2/N) × Σ_{i=1}^{N} (a_i − b_i)² / (a_i + b_i)²]    (10.2.15)

³ The conventional formula (Goovaerts 1997) of the pair-wise relative variogram is γ_PWR(h) = [1/(2N)] × Σ_{i=1}^{N} [Z(x_i) − Z(x_i + h)]² / {[Z(x_i) + Z(x_i + h)]/2}², where Z(x) is the value of the variable (Z) at location (x) and (h) is the vector separating Z(x_i) from the Z(x_i + h) points.
ative variogram is applied to duplicated samples
1998; Davis 2002; Sinclair and Blackwell 2002).
it simply measures the relative variance of the
This technique minimises the product of the de-
paired data. The square root of this value is equal
viations in both the X- and Y- directions. This,
to average coefficient of variation. Hence, the
in effect, minimises the sum of the areas of the
two approaches, one that uses the coefficient of
triangles formed by the observations and fitted
variation and the second that uses a geostatistical
linear function (Fig. 10.5). The general form of
approach based on the application of a pair-
the reduced major axis (RMA) is as follows
wise relative variogram for measuring precision
(10.2.18):
error are identical because both are based on
calculating the average relative variance of the
b D W0 C W1 a ˙ e (10.2.18)
paired data (10.2.15).
where (ai and bi ) are matching pairs of the data,
CVAVR .%/ D PFP .%/
(ai ) denotes a primary samples plotted along X
v !
u N axis and (bi ) is duplicate data, plotted along the Y
u 2 X .ai bi /2
D 100 t axis. (W0 ) is Y-axis intercept by the RMA linear
N iD1 .ai C bi /2 model, (W1 ) is the slope of the model to X-axis,
(10.2.15) (e) standard deviation of the data points around
the RMA line.
3
Conventional formula (Goovaerts 1997) of The parameters (W0 , W1 and e) estimated from
the pair-wise relative variogram: PWR .h/ D the set of the matching pairs of data (ai and bi ),
1 X ŒZ .xi / Z. xi C h/2
N
plotted along X-axis and Y-axis, respectively. The
, where Z(x) is a
2N iD1 Z .xi / C Z .xi C h/ 2 slope of RMA line (W1 ) is estimated as ratio
2 of standard deviations of the values (ai and bi )
value of variable (Z) at the location (x) and (h) is a vector (10.2.19).
separating Z(x) from Z .xi C h/ points
Fig. 10.5 Original (Fe (1), x-axis) vs. duplicate (Fe (2), y-axis) sample assays with the 1:1 line and the fitted RMA line; triangles are formed by projection of the data points onto the RMA line, and the RMA technique minimises the sum of the areas of all such triangles
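A minimal RMA fit is sketched below (Python, not part of the original text). The slope as a ratio of standard deviations follows (10.2.19) and the error terms follow (10.2.22) and (10.2.23) quoted below; the intercept expression and the use of Pearson's correlation coefficient are the usual RMA conventions and are assumptions here.

import math
from statistics import mean, stdev

def rma_fit(a, b):
    """Fit the reduced major axis b = W0 + W1*a to paired assays (a: original, b: duplicate)."""
    n = len(a)
    ma, mb, sa, sb = mean(a), mean(b), stdev(a), stdev(b)
    w1 = sb / sa                                   # slope: ratio of standard deviations (10.2.19)
    w0 = mb - w1 * ma                              # intercept (standard RMA expression, assumed)
    r = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / ((n - 1) * sa * sb)  # Pearson correlation
    s_slope = w1 * math.sqrt((1 - r ** 2) / n)     # error of the slope (10.2.23)
    s0 = sb * math.sqrt((1 - r) / n * (2 + (1 + r) * (ma / sa) ** 2))         # error of the intercept (10.2.22)
    return w0, w1, r, s0, s_slope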
The error on the Y-axis intercept (S_0) is estimated using (10.2.22) and the error on the slope (S_SLOPE) is estimated using (10.2.23).

S_0 = St.Dev(b_i) × √{[(1 − r) / N] × [2 + (1 + r) × (Mean(a_i) / St.Dev(a_i))²]}    (10.2.22)

S_SLOPE = [St.Dev(b_i) / St.Dev(a_i)] × √[(1 − r²) / N]    (10.2.23)

⁴ For consistency with the other estimates discussed in this section, the (P_RMA (%)) value is estimated at 1 standard deviation and expressed as a percentage.

Exercise 10.2.2.b Calculate the RMA and estimate the relative variances using the attached Excel file Exercise 10.2.2.b–d.xls (Appendix 1) containing the Visual Basic scripts.

10.2.2.6 Relative Difference Plot

The relative difference plot (RDP) has been suggested by Pitard (1998) as a graphic tool for diagnostics of the factors controlling the precision error. The method is based on estimating the differences between matching pairs of the data, which are
Fig. 10.6 Relative Difference Plot (RDP) showing Cu (%) grades of duplicated drill core samples, Cu-project, Russia (Reprinted from (Abzalov 2008) with permission of the Canadian Institute of Mining, Metallurgy and Petroleum). Open symbols (diamond) connected by fine tie-lines are RD(%) values calculated from matching pairs of data (i.e., original sample and duplicate). The average RD(%) value (thick dashed line) and ±2SD values (fine dashed lines) are shown for reference. The solid line is a smoothed line of the RD(%) values calculated using a moving windows approach. The 'Calibration Curve' sets the relationship between the RD% values on the primary y-axis, the sequential number of the data pairs plotted along the x-axis and the average grades of the data pairs plotted on the secondary y-axis
normalised to the means of the corresponding pairs of the data (10.2.25):

RD (Relative Difference) % = (100/N) × Σ_{i=1}^{N} (a_i − b_i) / [(a_i + b_i)/2]    (10.2.25)

where (a_i) and (b_i) are the matching pairs of the data and N is the number of pairs.
The calculated RD(%) values are arranged in increasing order of the average grades of the data pairs, and the RD(%) values are then plotted against the sequential numbers of the pairs, whereas the calculated average grades of the data pairs are shown on the secondary y-axis (Fig. 10.6). A more traditional approach would be to plot the RD(%) values against the average grades of the data pairs; however, in that case the diagnostic ability of the diagram is marred by the uneven distribution of the data between grade intervals, and plotting the RD(%) values against the sequential numbers of the data pairs allows this problem to be overcome.
The link between the average grades of the data pairs and their sequential numbers is established by adding a 'Calibration curve' to the RDP diagram (Fig. 10.6).
Relative difference values of the data pairs usually exhibit a large range of variations; therefore, interpretation of this diagram can be facilitated by applying the moving window technique to smooth the diagram (Fig. 10.6).
The example in Fig. 10.6 is based on data collected from a massive Cu-sulphide project in Russia. Construction of this diagram is explained in detail in Abzalov (2008, 2011) and is briefly summarised here. At the project, approximately 140 core duplicate samples have been analysed in an external reputable laboratory as part of the technical due diligence. The results, when plotted on an RDP diagram (Fig. 10.6), show that the copper assays of the low grade samples (Cu < 1.1 %) are biased: the assayed values have significantly underestimated the true copper grade of those samples. Another feature revealed by this diagram is that the low grade samples (Cu < 1.1 %) exhibit excessive precision errors.
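The RDP series can be prepared in a few lines; in this sketch (Python; window size and names are illustrative, not from the original text) the pairs are ordered by their average grade, RD% is computed per pair and a simple moving average provides the smoothed curve.

def rdp_series(pairs, window=15):
    """Return per-pair RD%, pair means (the 'calibration curve') and a smoothed RD% series."""
    pairs = sorted(pairs, key=lambda p: (p[0] + p[1]) / 2)        # increasing average grade
    rd = [100 * (a - b) / ((a + b) / 2) for a, b in pairs]
    means = [(a + b) / 2 for a, b in pairs]                       # secondary y-axis values
    half = window // 2
    smooth = [sum(rd[max(0, i - half):i + half + 1]) /
              len(rd[max(0, i - half):i + half + 1]) for i in range(len(rd))]
    return rd, means, smooth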
Fig. 10.7 Cu (%) grades of the duplicated drill hole samples plotted on the RDP diagram (Relative Difference (%) on the primary y-axis vs. Sequential Number of the Data Pairs on the x-axis). RD% values are arranged by length of the samples
Table 10.2 (fragment)

Cu-Mo porphyry, USA:
Cu (%) 0.003–14.4 1.45 398 1.00 1.4 7.1 10.0 7.1 3.9 – 1st (Coarse) split
Mo (%) 0.005–1.14 0.03 398 1.00 4.0 14.2 20.0 14.2 14.2
Cu (%) 0.015–9.6 1.43 346 1.00 0.9 2.5 3.6 2.5 1.8 – Pulp duplicate
Mo (%) 0.005–0.315 0.03 346 1.00 2.7 11.5 16.3 11.5 7.3

Iron Ore, Deposit 1, Pilbara, Australia:
Fe (%) 50.63–67.37 62.2 228 0.94 0.6 1.4 2.0 1.4 1.9 – Field duplicate
Al2O3 (%) 0.19–7.38 2.06 228 0.92 5.8 12.4 17.5 12.3 26.3
SiO2 (%) 0.7–26.0 3.45 228 0.95 7.1 13.7 19.3 13.7 30.1
LOI (%) 0.83–10.95 4.89 228 0.98 2.2 4.9 6.9 4.9 7.2

Iron Ore, Deposit 2, Pilbara, Australia:
Fe (%) 1.84–67.3 51.27 8088 1.00 0.6 2.2 3.0 2.1 1.8 – Field duplicate
Al2O3 (%) 0.11–50.66 5.7 8088 1.00 2.3 6.9 9.8 6.9 7.7
SiO2 (%) 0.68–95.96 12.56 8088 1.00 2.3 7.0 9.9 6.9 6.8
LOI (%) 0.34–26.03 7.38 8088 1.00 1.4 2.5 3.5 2.5 3.2
Comparison of the results in Table 10.2 shows that the precision errors produced by the Thompson-Howarth technique are consistently the lowest among all the reviewed methods. These results are not surprising because they reflect the assumption in the Thompson-Howarth method that measurement errors are normally distributed (Thompson and Howarth 1973, 1976, 1978; Howarth and Thompson 1976). Based on this assumption, the method uses the median value of the absolute differences of the data pairs for calculation of the precision error, and this inevitably produces biased results when the errors are not normally distributed. To overcome this problem, Stanley (2006) suggested a modification to the Thompson-Howarth method that calculates the root mean squares of the absolute pair differences instead of their medians. This alternative approach (Stanley 2006; Stanley and Lawie 2007b) is a more robust estimator in comparison to the conventional Thompson-Howarth method because it does not require a normality assumption. However, this modification is likely to be more sensitive to outliers, and should therefore be rigorously tested in order to better understand its possible limitations when applied to various geological materials.
A second group of estimators includes AMPD and other similar statistical methods (Table 10.1), which are based on calculation of the absolute differences between the original and duplicate samples normalised by the mean grades of the data pairs (Bumstead 1984; Shaw 1997; Roden and Smith 2001). Relative errors obtained from this approach are proportional to coefficients of variation (Stanley 2006; Stanley and Lawie 2007a) and represent nothing more than the product of the coefficient of variation (CV) and a constant. Hence, the various statistics included in this group (e.g., AMPD, HARD) offer no more information than the coefficient of variation itself (CV_AVR (%)), which is suggested for use as a universal measure of the relative precision error (Stanley and Lawie 2007a; Abzalov 2008, 2014).
The approach suggested by Francois-Bongarcon (1998) calculates the ratio of the difference between the paired data to their sum ((a_i − b_i)/(a_i + b_i)) and then estimates the variance of this complex variable. The results of this method are similar to the average coefficient of variation CV_AVR (%) (Table 10.2). However, formula (10.2.16) seems to be unnecessarily complicated in comparison to the conventional CV_AVR (%) approach (10.2.8).
The RMA method consists of the calculation of the RMA line and related parameters and is particularly useful for identifying bias between paired data (Sinclair and Bentzen 1998; Davis 2002; Sinclair and Blackwell 2002; Abzalov 2008). This method can be used to calculate the precision error estimate from the dispersion of the data points about the RMA function (10.2.24). However, the errors estimated from the RMA model (P_RMA (%)) can significantly differ from the CV_AVR (%) values (Table 10.2). These differences are likely due to the sensitivity of the RMA model to the presence of outliers, which makes this technique inappropriate for strongly skewed distributions.
In summary, the author concurs with the proposal of Stanley and Lawie (2007a) that CV_AVR (%) be used as the universal measure of relative precision error. Based on numerous case studies, the appropriate levels of sample precision are proposed for different types of deposits in Table 10.3, and it is suggested that these levels be used as approximate guidelines for assessing analytical quality.
It is important to remember that the values in Table 10.3, although based on case studies of mining projects, may not always be appropriate because mineral deposits can significantly differ by their grade ranges, statistical distributions of the studied values, mineralogy, textures and grain sizes.
Later it was shown (Abzalov 2014) that, because the estimation of CV_AVR (%) is based on the same expression as the pair-wise relative variogram, when it is used for estimating the samples precision the assay data variance can be directly compared with the spatial (i.e. geological) variability of the studied variable.
Table 10.3 Best and acceptable levels of the precision errors proposed as reference for evaluating the mining projects

Mineralisation type / deposit | Metal | Best practice | Acceptable practice | Sample type
Gold, very coarse grained | Au (g/t) | 20 (?) | 40 | Coarse rejects
Gold, coarse to medium grained | Au (g/t) | 20 | 30 | Coarse rejects
Gold, coarse to medium grained | Au (g/t) | 10 | 20 | Pulp duplicate
Cu-Mo-Au porphyry | Cu (%) | 5 | 10 | Coarse rejects
Cu-Mo-Au porphyry | Mo (%) | 10 | 15 | Coarse rejects
Cu-Mo-Au porphyry | Au (g/t) | 10 | 15 | Coarse rejects
Cu-Mo-Au porphyry | Cu (%) | 3 | 10 | Pulp duplicate
Cu-Mo-Au porphyry | Mo (%) | 5 | 10 | Pulp duplicate
Cu-Mo-Au porphyry | Au (g/t) | 5 | 10 | Pulp duplicate
Iron Ore, palaeochannel hosted goethitic mineralisation | Fe (%) | 1 | 3 | Field duplicate
Iron Ore, palaeochannel hosted goethitic mineralisation | Al2O3 (%) | 10 | 15 | Field duplicate
Iron Ore, palaeochannel hosted goethitic mineralisation | SiO2 (%) | 5 | 10 | Field duplicate
Iron Ore, palaeochannel hosted goethitic mineralisation | LOI (%) | 3 | 5 | Field duplicate
Cu-Au-Fe skarn and iron-oxide associated Cu-Au | Cu (%) | 7.5 | 15 | Coarse rejects
Cu-Au-Fe skarn and iron-oxide associated Cu-Au | Au (g/t) | 15 | 25 | Coarse rejects
Cu-Au-Fe skarn and iron-oxide associated Cu-Au | Cu (%) | 5 | 10 | Pulp duplicate
Cu-Au-Fe skarn and iron-oxide associated Cu-Au | Au (g/t) | 7.5 | 15 | Pulp duplicate
Ni-Cu-PGE sulphide deposit | Ni (%) | 10 | 15 | Coarse rejects
Ni-Cu-PGE sulphide deposit | Cu (%) | 10 | 15 | Coarse rejects
Ni-Cu-PGE sulphide deposit | PGE (g/t) | 15 | 30 | Coarse rejects
Ni-Cu-PGE sulphide deposit | Ni (%) | 5 | 10 | Pulp duplicate
Ni-Cu-PGE sulphide deposit | Cu (%) | 5 | 10 | Pulp duplicate
Ni-Cu-PGE sulphide deposit | PGE (g/t) | 10 | 20 | Pulp duplicate
Detrital Ilmenite sands | Total Heavy Minerals (%) | 5 | 10 | Field duplicate

(Reprinted from (Abzalov 2008) with permission of the Canadian Institute of Mining, Metallurgy and Petroleum)
…studies because the earlier samples were found to be of suboptimal quality. Another common situation is when the more advanced stages of the project studies are delayed because of the additional work needed to verify the earlier drilling results.
The first step is to design sample preparation procedures assuring that they are optimally suited for the studied mineralisation. A good starting point is to estimate the Fundamental Sampling Error (FSE) and plot the proposed protocol on a sampling nomogram (Fig. 9.6). This approach allows optimising such parameters as the weight of the initial sample, the particle sizes after each comminution stage and the sizes of the reduced samples. Based on this study, all stages of the sample preparation are clearly determined and the parameters of the process are quantified. These parameters need to be considered when choosing equipment for sample preparation. In particular, it is necessary to assure that the capacity of the equipment matches the proposed protocol.
The established sampling protocol needs to be documented. It is a good practice to represent it graphically, as a sample preparation flow chart, and make it easily accessible for the geological team.
The next step, after the sampling protocol has been optimised, is to add the quality control procedures. At this stage it is necessary to decide how many duplicates will be collected and how they are taken. It is also necessary to decide how many reference materials shall be inserted with each sample batch and develop procedures allowing the standards and blanks to be disguised. The project management at this stage decides whether to develop matrix-matched standards, specially prepared for the studied mineralisation, or to use commercially available certified standards.
QAQC procedures should be developed together with the new sampling programme, and if the latter is modified the associated QAQC map often has to be changed too. It is a good practice (Abzalov 2008, 2011) to plot the quality control procedures directly on the sample preparation flow-chart. Such combined diagrams are useful practical tools helping to implement and administrate the QAQC procedures, assuring that the quality of all sampling and preparation stages is properly controlled.
Finally, all procedures should be documented and the personnel responsible for their implementation and control determined and instructed. It is necessary to assure that the geological team, working at the mine or developing project, regularly reviews the QAQC results in order to diagnose sampling errors in a timely manner. The author, after reviewing many different mines, found the most effective and practically convenient periodicity for the data quality review is when the QAQC results are checked in every analytical batch and summary QAQC reports are prepared for approval by the chief geologist on a monthly basis. The monthly reports should contain several diagrams showing the performance of the reference materials and duplicates. The most commonly used diagrams are (Abzalov 2008, 2011):

• standard samples diagnostic diagram (Fig. 10.1);
• diagram of the blank samples (Fig. 10.2);
• scatter-diagram of the duplicate samples showing the RMA line and CV% (Fig. 10.5);
• RDP diagram (Fig. 10.6).

The calculated precision variances shall be compared with their acceptable levels. The levels of acceptable precision errors and of the deviation of the standards from their certified values should be clearly determined and documented as part of the QAQC procedures.

10.4.2 Frequency of Inserting QAQC Material to Assay Batches

The number of quality control samples and the frequency of their insertion into analytical batches should be sufficient for systematic monitoring of the assay quality. Recommended quality control materials vary from 5 to 20 % (Garrett 1969; Taylor 1987; Vallee et al. 1992; Leaver et al. 1997; Long 1998; Sketchley 1998) depending on the mineralisation type, location of the mining project and stage of the project evaluation. A brief overview of the different recommendations on the frequency of insertion of the QAQC material is given below.
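Once target rates are chosen from the recommendations reviewed below, translating them into a per-batch plan is trivial; the sketch below (Python; the rates shown are placeholders only, not recommendations from this chapter) returns the counts and randomly drawn, anonymous slot positions.

import random

def plan_batch_qc(batch_size, rates):
    """Convert QC insertion rates (fractions of batch size) into counts and anonymous slots."""
    counts = {name: max(1, round(rate * batch_size)) for name, rate in rates.items()}
    slots = sorted(random.sample(range(batch_size), sum(counts.values())))
    return counts, slots

# Example: a 50-sample batch with illustrative rates
print(plan_batch_qc(50, {"certified standard": 0.04, "blank": 0.02,
                         "field duplicate": 0.07, "pulp duplicate": 0.04}))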
Garrett (1969) has recommended that approximately 10 % of geochemical samples should be controlled by collecting the duplicate samples. Taylor (1987) recommends that from 5 to 10 % of the samples analysed by a laboratory should be a reference material. Leaver et al. (1997) suggest analysing one in-house reference material with every 20 assayed samples and also including at least one certified standard in the same batch.
Long (1998) has suggested that, to control the samples quality in the mining projects, it is necessary to do as follows: at least 5 % of pulps should be checked by an independent laboratory; 5 % of field and/or coarse rejects should have a second pulp prepared and analysed by the primary laboratory; and every sample batch should include from 1 to 5 % of standard reference materials, such as certified standards and blanks. Good practice is to analyse at least 5 % of the duplicates in a different laboratory (Long 1998).
Sketchley (1998) has recommended that every sample batch submitted to an analytical laboratory should include from 10 to 15 % quality control samples. In particular, every batch of 20 samples should include at least one standard, one blank and one duplicate sample (Sketchley 1998).
The Guide to Evaluation of Gold Deposits (Vallee et al. 1992) recommends that at least 10 % of the determinations in an exploration or mining project should be QAQC samples, including standards, blanks and duplicates.
Based on the review of the above mentioned published QAQC procedures and numerous case studies by the current author (Abzalov and Both 1997; Abzalov 1998, 1999, 2007; Abzalov and Humphreys 2002a; Abzalov and Mazzoni 2004; Abzalov and Pickers 2005; Abzalov et al. 1997, 2010, 2015), it is suggested that reliable control of sample precision is achieved by using approximately 5–10 % of field duplicates and 3–5 % of pulp duplicates. Duplicate samples should be prepared and analysed in the primary laboratory.
In order to detect bias in the analytical results, it is necessary to include 3–5 % of standard reference materials with every sample batch. A good practice is to use more than one standard, so that their values span the practical range of grades in the actual samples. However, standard samples alone cannot identify biases introduced at different stages of sample preparation. Anonymity of the standards is another issue, because standards can be easily recognised in the sample batches and be treated more thoroughly by laboratory personnel. To overcome this problem, some duplicate samples should be analysed in an external, reputable laboratory as part of the accuracy control. It is suggested that at least 5 % of the total analysed duplicates, including pulp duplicates and coarse rejects, should be analysed in the reputable external laboratory.

10.4.3 Distribution of the Reference Materials

Standards should be inserted with such frequency that allows constant monitoring of the possible instrumental drifts and biases. In general, the best practice is to insert standards and blanks into every sample batch. Good practice is to use more than one standard, so that their values bracket the practical range of the grades in the actual samples. The distribution of the reference material within the batch should allow detecting possible biases of the results and, at the same time, these reference samples should remain anonymous.

10.4.4 Distribution of the Duplicate Samples

Practical recommendations on optimisation of the duplicated sample analysis, with a particular emphasis on the distribution of the duplicates in the analytical batches, have been presented and discussed by many researchers (Thompson and Howarth 1978; Sketchley 1998; Vallee et al. 1992; Abzalov 2008). The basic rules of collecting the duplicate samples are summarised below.
Duplicate samples should be chosen in such a manner that all stages of data preparation are properly addressed and the precision errors associated with all sub-sampling stages are adequately estimated by the sample duplicates.
This means that the sample duplicates should include field duplicates, coarse reject duplicates and pulp duplicates (Long 1998). Special attention should be given to the field duplicates as they are the most informative for estimation of the overall precision of samples. When rotary drilling is used, the field duplicates, which in this case are called 'rig' duplicates, are collected from the sample splitting devices built into the drill rigs. These can be rotary, cone or riffle splitters. In the case of diamond core drilling, the field duplicates are represented by another portion of the core. Field duplicates of the blast hole samples should be another sample, taken from the same blast hole cone as the original sample and following exactly the same procedures.
Coarse reject duplicates consist of material representing the output from the crushers. There can often be more than one type of coarse rejects when the sample preparation requires several stages of crushing and splitting. In this case, coarse reject duplicates should be collected for each stage of crushing and/or grinding followed by sample reduction.
Operations using large pulverisers, such as LM5, usually don't have coarse rejects as all of the collected sample is pulverised to a fine pulp. In this case, it is extremely important to collect and analyse the field duplicates to understand the overall repeatability of the assayed results.
When collecting the sample duplicates it is necessary to remember that they should be identical to the sample which they are duplicating. Unfortunately, this is not always possible, as the chemical or physical characteristics of the remaining material can be altered after the sample has been collected, or its amount is simply insufficient to make a representative duplicate. For example, if drill core is sampled by cutting a half core, it is unwise to collect a quarter core as a duplicate, as this will produce duplicate samples of twice smaller weight than the original samples. Such duplicates are likely to produce a larger precision error than that of the original samples. Using all of the remaining second half of the core as a duplicate is also a suboptimal practice as it breaches the auditability of the data. The problem can be partially resolved by taking the duplicate using the approach shown in Fig. 10.8.
In the case when all drill samples are assayed, which is a common case at the bauxite mines, the only possibility to assess the precision of the drill samples is to use twin holes. However, it is necessary to account for the geological factor (Abzalov 2009).
It is important to assure that the duplicate samples are representative for the given deposit, covering the entire range of the grade values, mineralisation types and different geological units, and have a good spatial coverage of the deposit. It is difficult to achieve the representativeness of the duplicates when they are collected at random from the sample batches. The disadvantage of randomly selected duplicates is that most of them will represent the most abundant type of the rocks, usually barren or low-grade mineralisation. The ore grade and, in particular, high-grade intervals are often poorly represented in randomly chosen duplicates. They can also be non-representative regarding mineralisation types, their spatial distribution or grade classes. To meet these two conditions,
Howarth R, Thompson M (1976) Duplicate analysis in geochemical practice: Part 2, examination of proposed method and examples of its use. Analyst 101:699–709
ISO Guide 33 (1989) Uses of certified reference materials. Standards Council of Canada, Ontario, p 12
Kane JS (1992) Reference samples for use in analytical geochemistry: their availability, preparation and appropriate use. J Geochem Exp 44:37–63
Leaver ME, Sketchley DA, Bowman WS (1997) The benefits of the use of CCRMP's custom reference materials. Canadian certified reference materials project. In: Society of mineral analysts conference. MSL No 637, p 16
Long S (1998) Practical quality control procedures in mineral inventory estimation. Exp Min Geol 7(1–2):117–127
Pitard FF (1998) A strategy to minimise ore grade reconciliation problems between the mine and the mill. In: Mine to mill. AusIMM, Melbourne, pp 77–82
Roden S, Smith T (2001) Sampling and analysis protocols and their role in mineral exploration and new resource development. In: Edwards A (ed) Mineral resources and ore reserve estimation – the AusIMM guide to good practice. AusIMM, Melbourne, pp 73–78
Shaw WJ (1997) Validation of sampling and assaying quality for bankable feasibility studies. In: The resource database towards 2000. AusIMM Illawara branch, Wollongong, Australia, pp 69–79
Sinclair AJ, Bentzen A (1998) Evaluation of errors in paired analytical data by a linear model. Exp Min Geol 7(1–2):167–173
Sinclair AJ, Blackwell GH (2002) Applied mineral inventory estimation. Cambridge University Press, Cambridge, p 381
Sketchley DA (1998) Gold deposits: establishing sampling protocols and monitoring quality control. Exp Min Geol 7(1–2):129–138
Stanley CR (2006) On the special application of Thompson-Howarth error analysis to geochemical variables exhibiting a nugget effect. Geochem Explor Environ Anal 6:357–368
Stanley CR, Lawie D (2007a) Average relative error in geochemical determinations: clarification, calculation and a plea for consistency. Exp Min Geol 16:265–274
Stanley CR, Lawie D (2007b) Thompson-Howarth error analysis: unbiased alternatives to the large-sample method for assessing non-normally distributed measurement error in geochemical samples. Geochem Explor Environ Anal 7:1–10
Taylor JK (1987) Quality assurance of chemical measurements. Lewis Publishers, Michigan, p 135
Thompson M, Howarth R (1973) The rapid estimation and control of precision by duplicate determinations. Analyst 98(1164):153–160
Thompson M, Howarth R (1976) Duplicate analysis in geochemical practice: Part 1. Theoretical approach and estimation of analytical reproducibility. Analyst 101:690–698
Thompson M, Howarth R (1978) A new approach to the estimation of analytical precision. J Geochem Exp 9(1):23–30
Vallee M, David M, Dagbert M, Desrochers C (1992) Guide to the evaluation of gold deposits. Geological Society of CIM, Special Volume 45, p 299