User Guidance: General Principles
General principles
For an introduction to this style of statistical sampling please see the article "Efficient samples for
internal control and audit testing" at http://www.internalcontrolsdesign.co.uk/samples/index.html.
This workbook offers two formats for using these techniques. The Test Schedule format is a classic
auditor's test schedule, with space for a list of sample items and up to three tests on each item. As
you answer Y (yes) or N (no) for each test on each item, the statistical inferences you can make from
your testing are immediately updated at the top of the page.
Although it makes sense to have some idea of how many items you are likely to test, you do not need
to make a firm decision before you start. Just start testing and stop when the conclusions are clear
enough for your purposes.
The other format is a Summary format, where you can just enter summary results (number of items
tested and number of exceptions found) from several samples or tests, and see statistics inferred from
your evidence.
These formulae are appropriate where you want to assess the exception rate of a process at a point in
time using sampled items processed at around that time. They are also suitable for sampling from
very large populations. They are NOT appropriate for sampling from small populations.
One of the big advantages of Bayesian statistics is that it helps us combine evidence from more than
one source. To support this, both formats give you a way to express what you already believe about the
exception rate using a technique called Equivalent Prior Samples. You don't have to do this, but if you
do then the end result from these spreadsheets is statements about the exception rate that combine
what you knew before with what you have learned from your new sample.
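To make this concrete, here is a minimal sketch of how an equivalent prior sample combines with new results. The worksheets summarise beliefs about the exception rate as a Beta(Alpha, Beta) distribution built from simple counts, as the example worksheet's Alpha and Beta columns show; the function name below is illustrative, not part of the workbook.

```python
# Sketch: how an Equivalent Prior Sample combines with new test results.
# Starting from the uniform Beta(1, 1), the workbook's figures follow:
#   alpha = 1 + total exceptions found
#   beta  = 1 + total items that passed
def posterior_parameters(prior_tested, prior_exceptions,
                         new_tested, new_exceptions):
    exceptions = prior_exceptions + new_exceptions
    passes = (prior_tested - prior_exceptions) + (new_tested - new_exceptions)
    return 1 + exceptions, 1 + passes

# Test 1 from the Summary example worksheet: a prior sample of 20 items
# with 1 exception, combined with a new sample of 86 items with 1 exception.
print(posterior_parameters(20, 1, 86, 1))  # (3, 105), the worksheet's Alpha and Beta
```

Note how the prior sample simply adds to the counts, so evidence from two sources ends up in a single distribution.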
Unless you bring in other evidence the formulae in these spreadsheets make the democratic
assumption that all exception rates are equally likely. However, the results of other tests, the
continued trading of a company, and our general experience of processes designed to work tell us
that lower error rates are usually more likely than very high ones. For example, if you are testing
telephone bills and want to know what proportion of them are wrong, it is much more likely that 1% are
wrong than 99%.
The idea of Equivalent Prior Samples is that you imagine that your views about the exception rate
before starting on the new sample are the result of having tested a previous sample. You then enter a
number of items tested and a number of exceptions found in that imaginary previous sample.
In practice, of course, it's hard to use your imagination like that. However, you can think how confident
you are that the exception rate is less than 5% and then adjust your prior sample numbers to match
that view. If you're not sure, always err on the side of smaller prior sample sizes, or just leave the prior
sample figures at zero and let all the evidence come from your new sample.
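One way to match a prior sample to a view like "fairly confident the rate is below 5%" is to compute the confidence each candidate prior would imply. A sketch, assuming whole-number Alpha and Beta (which is all the workbook produces); the helper uses the standard identity between the Beta distribution and binomial tail probabilities.

```python
from math import comb

def beta_cdf(x, alpha, beta):
    # Confidence that the exception rate is below x for a Beta(alpha, beta)
    # belief. Uses the Beta/binomial identity, exact for whole-number
    # alpha and beta.
    n = alpha + beta - 1
    return sum(comb(n, j) * x ** j * (1 - x) ** (n - j)
               for j in range(alpha, n + 1))

# Candidate prior samples with no exceptions found: how confident would
# each make you that the exception rate is below 5%?
for prior_n in (0, 10, 20, 50):
    alpha, beta = 1, 1 + prior_n
    print(f"{prior_n:3d} items, 0 exceptions -> "
          f"{beta_cdf(0.05, alpha, beta):.1%} confident rate < 5%")
```

With no prior items the confidence is just the uniform-prior default of 5%; pick the smallest prior sample whose implied confidence you would actually stand behind.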
Both formats tell you the confidence that the exception rate is below 5%, below 1% and below 0.1%.
This is to give you a feel for what the complete distribution looks like. The complete distribution itself
can be viewed on the Test Schedule worksheet if you want to see its general shape.
If you need to prove very low exception rates then you will find that the equivalent prior sample doesn't
usually save you much testing, relatively speaking. However, if you are doing a small number of very
expensive tests and the error rate sought is not particularly low, then you may find that bringing in
other evidence helps you cut sample testing costs considerably.
Test Schedule format
For an example of the Test Schedule format filled in, see the Test Schedule example worksheet.
Areas that you might want to type in are surrounded by a black line.
The only part of this that's not obvious from the general introduction and example is how the graph
works.
The graph shows the probability density of beliefs about the exception rate. Put another way, the line
is high over exception rates that are likely. The exception rate can range from 0 to 1 (i.e. 0 to 100%)
but most real exception rates are quite low so I've given you two cells (C64 and C65) to set the upper
and lower boundary of the range you want to see. Don't forget to keep these between 0 and 1.
If you change the range from its starting value then the Excel chart that shows the curves will also
have to be adjusted. Change the scale for the X axis to the range you want to see.
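If you want to check the curve outside the spreadsheet, the density it plots can be computed directly. A sketch, assuming the belief is a Beta(Alpha, Beta) distribution as elsewhere in the workbook; the lower and upper bounds below play the role of cells C64 and C65.

```python
from math import exp, lgamma, log

def beta_pdf(x, alpha, beta):
    # Probability density of a Beta(alpha, beta) belief at exception rate x.
    # (Density at the exact endpoints is treated as 0 for simplicity.)
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_norm = lgamma(alpha + beta) - lgamma(alpha) - lgamma(beta)
    return exp(log_norm + (alpha - 1) * log(x) + (beta - 1) * log(1 - x))

# Tabulate the curve over a chosen range, as the chart does,
# here for a Beta(3, 105) belief.
lower, upper, points = 0.0, 0.1, 11
for i in range(points):
    x = lower + (upper - lower) * i / (points - 1)
    print(f"{x:.2f}  {beta_pdf(x, 3, 105):8.3f}")
```

The line is highest near the mode of the distribution and falls away quickly, which is why most of the interesting detail sits in a narrow range near zero.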
Summary format
For an example of the Summary format see the Summary example worksheet.
Once again, areas that you might want to type in are surrounded by a black line.
The three test lines in the example illustrate the sort of thing that can be achieved. In the first test the
process involved is manual and some exceptions are expected, captured in the equivalent prior
sample, which imagines 1 exception found in a test of 20 items. The actual results appear to be better
than this and the auditor has decided to stop once it is established that the exception rate is lower
than 5% with a confidence of just over 90%.
In the second test a much higher level of reliability is expected and sought from a fully automated
process and largely automated test. Prior testing has given quite a narrow expected range for the
exception rate. In the new testing the error rate turns out to be better than in the past and the auditor
decides that being 99% confident that the exception rate is below 0.1% is good enough.
For the third and final test shown there is no previous evidence and the auditor stops testing having
reached 95% confidence that the exception rate is below 1%.
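The third test is easy to check by hand because there is no prior sample and no exceptions: the belief is Beta(1, 299), whose confidence below a rate x has a simple closed form. A quick sketch:

```python
# Third test in the Summary example: no prior evidence, 298 items tested,
# 0 exceptions, so the belief is Beta(1, 1 + 298) = Beta(1, 299).
# For Beta(1, b) the confidence that the rate is below x is 1 - (1 - x)**b.
beta = 1 + 298
confidence = 1 - (1 - 0.01) ** beta  # confidence that exception rate < 1%
print(f"{confidence:.4%}")  # 95.0464%, matching the worksheet
```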
If you think these sample sizes seem rather large or the inferences weak then you need to adjust
your expectations! These samples and inferences are correct, and if your department's policy is to do
30 or 50 items in the belief that this means 95% confidence in something or other, then the policy is
wrong. But don't assume this means you have to increase your sample sizes. It probably means
you have to be more explicit about the value of other evidence.
Test Schedule example worksheet (summary figures for the three tests):

                              Test 1      Test 2      Test 3
Confidence that exception rate is:
  < 5%                      81.5721%    47.2522%    25.0586%
  < 1%                      12.4549%     1.4573%     0.1718%
  < 0.1%                     0.1760%     0.0020%     0.0000%
Count of Y                        39          38          37
Count of N                         1           2           3
alpha                              2           3           4
beta                              60          49          48
[Chart: probability density curves for the three tests, plotted over exception rates 0 to 0.1.]
Summary example worksheet:

                                                Equivalent Prior Sample       New sample results
Test ref  Test description                      Items tested  Exceptions      Items tested  Exceptions
1         Authorisation initialled correctly         20            1               86            1
2         Matched to customer database             1000            2             8100            0
3         Correctly split                             0            0              298            0
(Test refs 4 to 10 are left blank.)
Confidence that exception rate is:

Alpha     Beta     < 5%         < 1%         < 0.1%      < 0.01%
3          105     90.7632%     9.2695%      0.0184%     0.0000%
3         9099    100.0000%   100.0000%     99.4270%     6.4521%
1          299    100.0000%    95.0464%     25.8552%     2.9459%
(Blank rows default to Alpha = 1, Beta = 1, the uniform prior, giving 5.0000%, 1.0000%, 0.1000% and 0.0100%.)
[The remaining worksheets are blank templates with the same layout as the examples. With no evidence entered every test keeps the uniform prior (Alpha = 1, Beta = 1), so the default confidences are simply 5.0000% (< 5%), 1.0000% (< 1%), 0.1000% (< 0.1%) and 0.0100% (< 0.01%), and the charts show no curves.]