
Journal of the American Statistical Association

A Method for Obtaining and Analyzing Sensitivity Data

W. J. Dixon (University of Oregon) & A. M. Mood (Iowa State College)

Published online: 11 Apr 2012.

To cite this article: W. J. Dixon & A. M. Mood (1948) A Method for Obtaining and
Analyzing Sensitivity Data, Journal of the American Statistical Association, 43:241,
109-126

To link to this article: http://dx.doi.org/10.1080/01621459.1948.10483254
A METHOD FOR OBTAINING AND ANALYZING
SENSITIVITY DATA*
W. J. DIXON
University of Oregon
AND
A. M. MOOD
Iowa State College
The standard method of dealing with sensitivity or dosage-
mortality data is the probit technique developed by Bliss and
Fisher. This paper provides an alternative technique based on
a special system for obtaining such data. It has some ad-
vantages when observations must be taken on individuals
rather than groups of individuals, and it may be preferred in
certain other situations.

INTRODUCTION

EXPERIMENTAL investigations often deal with continuous variables
which cannot be measured in practice. For example, in testing the
sensitivity of explosives to shock, a common procedure is to drop a
weight on specimens of the same explosive mixture from various
heights. There are heights at which some specimens will explode, and
others will not, and it is assumed that those which will not explode would
explode were the weight dropped from a sufficiently greater height. It
is supposed, therefore, that there is a critical height associated with
each specimen, and that the specimen will explode when the weight is
dropped from a greater height and will not explode when the weight
is dropped from a lesser height. The population of specimens is thus
characterized by a continuous variable-the critical height-which
cannot be measured. All one can do is select some height arbitrarily
and determine whether the critical height for a given specimen is less
than or greater than the selected height.
This situation arises in many fields of research. Thus in testing insec-
ticides, a critical dose is associated with each insect, but one cannot
measure it. He can only try some dose and observe whether or not
the insect is killed, that is, observe whether the critical dose for that
insect is less than or greater than the chosen dose. The same difficulty
arises in pharmaceutical research dealing with germicides, anesthetics,
* This paper is in part an adaptation of a memorandum submitted to the Applied Mathematics
Panel by the Statistical Research Group, Princeton University. The Statistical Research Group oper-
ated under a contract with the Office of Scientific Research and Development, and was directed by the
Applied Mathematics Panel of the National Defense Research Committee.


and other drugs; in testing strength of materials; in psycho-physical


research dealing with threshold stimuli; and in several areas of biologi-
cal and medical research.
In true sensitivity experiments it is not possible to make more than
one observation on a given specimen. Once a test has been made the
specimen is altered (the explosive is packed; the insect is weakened)
so that a bona fide result cannot be obtained from a second test. The
common procedure in experiments of this kind is to divide the sample
of specimens into several groups (usually but not necessarily of the
same size) and to test one group at a chosen level, a second group at a
second level, and so on. The data consist of the numbers affected and
not affected at each level. A method of analyzing such data (variously
called "sensitivity" data, "all or none" data, "quantal responses") has
been developed by Bliss and Fisher [references 1, 2], and discussed by
other writers [3, 4, 5, 6].

THE "UP AND DOWN" METHOD

A new technique for obtaining sensitivity data has been developed
and used in explosives research. The authors became acquainted with
this new method in 1943 at the Explosives Research Laboratory, Bruce-
ton, Pennsylvania. It has come to be called the "up and down" method.
The method may be employed in any sensitivity experiment, but we
shall discuss it in terms of the explosives to avoid general terminology.
The technique is to choose some initial height h0, and a succession
of heights h1, h2, h3, ... above h0 together with a succession h-1, h-2,
h-3, ... below h0. The first specimen is tested by dropping the weight
from height h0. If the first specimen explodes, the second specimen will
be tested at h-1; otherwise the second specimen will be tested at h1.
In general, any specimen will be tested at the level immediately below
or immediately above the level of the previous test according as there
was or was not an explosion on the previous test. The result of such an
experiment might be portrayed as in Figure 1 where the x's represent
explosions and the o's non-explosions. The first test is on the left at the
highest level; this was a success (explosion) so the second test was made
at the next lower level and was also a success; the third test was there-
fore made at the level below that of the second and since it was a failure
the fourth test was made at the level above that of the third test.
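The testing rule lends itself to simulation. The following sketch is an editorial illustration (the level grid, starting level, and normally distributed critical heights are assumptions for the example, not data from the paper):

```python
import random

def up_and_down(critical_heights, levels, start):
    """Apply the up-and-down rule: after an explosion (success) the next
    specimen is tested one level lower, after a failure one level higher."""
    i = start
    record = []
    for c in critical_heights:
        exploded = levels[i] >= c   # explodes iff dropped from at or above its critical height
        record.append((levels[i], exploded))
        if exploded and i > 0:
            i -= 1                  # move down after a success
        elif not exploded and i < len(levels) - 1:
            i += 1                  # move up after a failure
    return record

# Sixty specimens with normalized critical heights drawn from N(1.3, 0.2),
# tested on levels spaced d = 0.3 apart, starting at the top level.
random.seed(1)
heights = [random.gauss(1.3, 0.2) for _ in range(60)]
tests = up_and_down(heights, levels=[0.8, 1.1, 1.4, 1.7, 2.0], start=4)
```

In such runs most tests quickly settle onto the few levels nearest the mean, which is the concentration property discussed next.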
The primary advantage of this method is that it automatically con-
centrates testing near the mean. We shall see later that this increases
the accuracy with which the mean can be estimated. Or in other words,
RECORD OF A SAMPLE OF SIXTY TESTS

Normalized height    Number of x's    Number of o's
      2.0                  1                 0
      1.7                 10                 0
      1.4                 18                 9
      1.1                  2                18
      0.8                  0                 2

FIGURE 1

for a given accuracy the up and down method will require fewer tests
than the ordinary method of testing groups of equal size at preassigned
heights. The saving in the number of observations may be of the order
of 30 to 40 per cent (see Appendix A).
Another advantage is that the statistical analysis is quite simple in
certain circumstances whereas the analysis for the ordinary method is
rather tedious.
The method has one obvious disadvantage in certain kinds of experi-
ments because it requires that each specimen be tested separately. This
is not important in explosives experiments because each test must be
made separately anyway. But in tests of insecticides, for example, a
large group of insects can sometimes be treated as easily as a single
one, and in large experiments of this kind any advantage of the up and
down method might well be outweighed by this requirement of single
tests. Even here, if expensive laboratory animals were being used, the
advantage in economy of tests might offset the trouble of making single
tests.
CONDITIONS ON THE EXPERIMENT

The statistical analysis of data obtained can be quite simple provided
the experiment satisfies certain conditions. Less restrictive conditions
must be fulfilled in order that any analysis will be possible. These will
be discussed here and the actual analysis will be given in the following
section.
In the first place, the analysis requires that the variate under
analysis be normally distributed. In practice the variate of interest to
the research worker can rarely be considered to be normally distributed.
It is therefore necessary that the natural variate be transformed to
one which does have the normal distribution. This is readily done pro-
vided the research worker has enough experience and data on his ma-
terial to be able to specify rather accurately the shape of his distribution
function. It is often the case in dosage mortality experiments and in
experiments on explosives that the logarithm of the dosage concentration
or of the height is reasonably normally distributed. But in other
areas of research, and sometimes in these areas, other transformations
are more appropriate [7].
If one has no idea of the shape of his distribution function then the
data of the experiment itself must be used to provide this information.
The common procedure here is to compute the percentage affected at
each level and plot these percentages on arithmetic probability paper
against various functions of the variate in question. Usually one can
soon discover what sort of function will force the percentages to lie
sensibly along a straight line. There are, of course, infinitely many
functions to choose from; the chosen function should be as simple as
possible consistent with whatever knowledge is available concerning the
nature of the material at hand.
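The probability-paper search can be imitated numerically: convert the percentage affected at each level to a normal deviate and ask which transformation of the variate makes the relation most nearly linear. The sketch below is an editorial illustration with invented percentages (not data from the paper):

```python
from math import log10, sqrt
from statistics import NormalDist

def corr(xs, ys):
    """Pearson correlation, used here as a crude index of linearity."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Hypothetical dose levels and proportions affected (illustrative only)
doses = [5, 10, 20, 40, 80]
prop = [0.03, 0.16, 0.50, 0.84, 0.97]

deviates = [NormalDist().inv_cdf(p) for p in prop]
r_raw = corr(doses, deviates)                      # linearity against the raw dose
r_log = corr([log10(x) for x in doses], deviates)  # linearity against log dose
# Here log dose is markedly more linear, so the log transform would be chosen.
```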
We have already mentioned that the up and down method is par-
ticularly effective for estimating the mean. It is not a good method for
estimating small or large percentage points (for example, the height at
which 99 per cent of specimens explode) unless normality of the dis-
tribution is assured. In fact no method which uses the normal distribu-
tion can be relied on to estimate extreme percentage points because
such estimates depend critically on the assumption of normality. In
most experimental research, it is possible to find simple transformations
which make the variate essentially normal in the region of the mean,
but to make it normal in the tails is quite another matter. Nothing
short of an extensive exploration of the distribution involving perhaps
thousands of observations will suffice here. Bartlett [8] has recently
presented an interesting technique for dealing with this problem.
A second condition on the experiment is that the sample size must be
large if the analysis to be described is to be applicable. As it turns out,
the effective sample size is only about half the actual sample size. The
statistical analysis is based on large sample theory so that if one uses
the analysis on a sample of size forty, he will in effect be using large
sample theory on a sample of size twenty. Measures of reliability
may well be very misleading if the sample size is less than forty or
fifty.
A further condition is necessary if the statistical analysis is to be
simple. One must be able to estimate roughly in advance the standard
deviation of the normally distributed transformed variate. The inter-
val between testing levels should be approximately equal to the stand-
ard deviation. This condition will be well enough satisfied if the inter-
val actually used is less than twice the standard deviation. This requirement
is not severe, for research workers who repeatedly perform these
experiments on essentially similar materials can usually make very
good preliminary estimates. This is the case in explosives research or
biological assay, for example. This circumstance (of repeated experi-
ments) is precisely the one in which a simple analysis is most desirable.
STATISTICAL ANALYSIS

The simple method of analysis given in this section is applicable only
when all the conditions described in the preceding section are fulfilled.
The theory underlying the method is given in Appendix A. The more
complex analysis required when the levels are not equally spaced or
when the distance between levels exceeds twice the standard deviation
is given in Appendix B.
We again revert to the explosives experiment in describing the meth-
od. Suppose it is known for the given type of explosive that the log-
arithms of the critical heights are normally distributed. Letting h
represent the height, y = log h will then be the normally distributed
variate. We shall call y the normalized height, and represent the mean
and variance of its distribution by μ and σ². The experiment is performed
by choosing an initial height for the first test, say h0. This
should be chosen near the anticipated mean. The other testing levels
are determined so that the values of the normalized height y are equally
spaced. If d is the preliminary estimate of σ, and if y0 = log h0, then the
actual testing heights are obtained by putting log h = y0 ± d, y0 ± 2d,
y0 ± 3d, ... , and solving for h. The heights will then be so spaced
that the transformed variate is equally spaced with spacing equal to
its anticipated standard deviation. All computations are done in terms
of y.
In any experiment the total number of successes will be approxi-
mately equal to the total number of failures. In fact, the number of
failures at any level cannot differ by more than one from the number of
successes at the next higher level. For estimating μ and σ only the successes
or only the failures are used, depending on which has the smaller
total. In the example shown in Figure 1 there are fewer failures than
successes so the failures would be used. We shall let N denote the
smaller total and let n0, n1, n2, ..., nk denote the frequencies at each
level for this less frequent event, where n0 corresponds to the lowest level
and nk to the highest level on which the event occurs. We have then
Σ n_i = N.
The estimates of μ and σ are based on the first two moments of the
y values using the frequencies n_i. But since the y values are equally

spaced, the moments are more easily computed in terms of the two
sums
A = Σ i n_i
B = Σ i² n_i.
In this notation, the estimate of μ, say m, is

m = y′ + d(A/N ± 1/2)    (1)

where y′ is the normalized height corresponding to the lowest level on
which the less frequent event occurs. The plus sign is used when the
analysis is based on the failures, and the minus sign when it is based
on the successes.
The sample standard deviation is

s = 1.620 d ((NB − A²)/N² + .029)    (2)

and this, of course, is the estimate of σ. This is a curious estimate in that
while it is a linear function of (NB − A²)/N², it gives the estimate of
the standard deviation, not the square of the standard deviation. The
formula is an approximate one which is quite accurate when (NB − A²)/N²
is larger than 0.3 but breaks down rapidly when (NB − A²)/N²
becomes less than 0.3. In the latter instance the formula cannot be
used and the more elaborate calculation described in Appendix B must
be employed.
The example of Figure 1 will illustrate the use of the formulas. Here
the y values used were 2, 1.7, 1.4, 1.1, 0.8; the level of the first test y0
being 2, and d being 0.3. Among the sixty tests there were 31 explosions
and 29 failures, hence the latter are used to estimate the parameters.
The failures appear on three levels (0.8, 1.1, 1.4) with frequencies n0 = 2,
n1 = 18, n2 = 9. We have then N = 29, A = 36, B = 54, so that the
mean is

m = 0.8 + 0.3(36/29 + 1/2) = 1.32

and the standard deviation is

s = (1.620)(0.3)(270/841 + .029) = .17.
The sample was actually drawn from a normal population with μ = 1.3
and σ = 0.2 using Mahalanobis' [9] table of random normal deviates.
The mean and standard deviation of the sixty observations were 1.312
and .158 so that it was a fairly representative sample.
Percentage points would be estimated by m + ks, where k is chosen
from tables of the normal deviate to give the desired percentage. Thus
in the example, the 5 per cent point is estimated by
1.32 - (1.645)(.17) = 1.04.
If the y values are thought of as logarithms to base ten of actual
heights in inches in an explosives experiment, the antilogarithms of
estimated percentage points would be estimates of the corresponding
points for the distribution of h. Thus the median (not mean) value of
h is estimated by
antilog 1.32 = 20.9 inches
and the 5 per cent height by
antilog 1.04 = 11 inches.
The antilogarithm of s does not estimate the standard deviation for
h, however, and any computation which involves the standard devia-
tion (estimates of percentage points, confidence limits) must be done
in terms of the normalized height, and only the final result transformed
to actual heights.
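The arithmetic of formulas (1) and (2) and of the worked example can be checked in a few lines (an editorial sketch; the counts, spacing, and lowest failure level are those of Figure 1):

```python
d = 0.3         # spacing of the normalized heights
y_prime = 0.8   # lowest level on which the less frequent event (failure) occurs
n = [2, 18, 9]  # failures at y = 0.8, 1.1, 1.4  (n0, n1, n2)

N = sum(n)                                     # 29
A = sum(i * ni for i, ni in enumerate(n))      # 36
B = sum(i * i * ni for i, ni in enumerate(n))  # 54

m = y_prime + d * (A / N + 0.5)                # plus sign: analysis based on failures
s = 1.620 * d * ((N * B - A * A) / N ** 2 + 0.029)
p5 = m - 1.645 * s                             # estimated 5 per cent point
median_height = 10 ** m                        # inches, if y = log10(height)
```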
CONFIDENCE INTERVALS

Ordinarily the standard deviation of a sample mean, m, is given by
σ_m = σ/√N, where σ is the population standard deviation and N the
sample size. In the present case this expression must be multiplied by
a factor which we shall call G, so that the formula for the standard
error of the mean is

σ_m = Gσ/√N    (3)

and G depends on the ratio d/σ and on the position of the mean relative
to the testing levels. G is plotted in Figure 2 as a function of d/σ.
The position of the mean relative to the testing levels does not affect
G unless the interval d is large; the solid branch of the curve gives the
value of G when the mean falls on one of the testing levels, while the
dashed branch gives the value when the mean falls midway between
two levels. Curves for other positions of the mean would fall between
the two branches.
FIGURE 2
[G and H plotted against d/σ; solid branches: mean on a testing level, dashed branches: mean midway between two levels.]

In practice σ is not known and s must be used in (3) to obtain an
estimate, say s_m, of σ_m. In the illustrative example with s = .17, we have
d/s = 1.8 so that G is about 1.12. The estimate of σ_m is therefore

s_m = (.17)(1.12)/√29 = .035.
A confidence interval for m may now be estimated by m ± k s_m. Thus a
95 per cent confidence interval is
1.32 ± (1.96)(.035) or 1.25 to 1.39
using large sample theory. For moderate values of N, it might be
preferable to use the value of k given by the t distribution for N − 1
degrees of freedom, but it is likely that this is a minor matter relative
to the error of using large sample theory for moderate values of N.
Again assuming the confidence interval refers to the logarithm of an
actual height, it gives rise to an asymmetric 95 per cent confidence
interval (18 to 25 inches) for the median height.
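The computation of this section can be checked directly (an editorial sketch; G = 1.12 is the value read from Figure 2 at d/s = 1.8):

```python
from math import sqrt

m, s, N, G = 1.32, 0.17, 29, 1.12
s_m = G * s / sqrt(N)                    # standard error of the mean, (3) with s for sigma
lo, hi = m - 1.96 * s_m, m + 1.96 * s_m  # 95 per cent confidence interval for the mean
lo_in, hi_in = 10 ** lo, 10 ** hi        # asymmetric interval for the median height, inches
```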
The standard error of the sample standard deviation, say σ_s, is ordinarily
given by σ/√(2N), but in the present analysis an additional
factor is again required. We shall write

σ_s = Hσ/√N    (4)

where we have incorporated the 1/√2 into the extra factor. H is
plotted in Figure 2 where the solid branch gives the value of H when the
mean falls on a level, while the dashed branch gives the value when the
mean is midway between two levels. When d/σ is less than two there
will be little error introduced by interpolating linearly between the
two branches for other positions of the mean. Thus if the mean falls
d/4 from a testing level, one may use the value midway between
the two branches. For the illustrative example with d/s = 1.8, we find
H to be about 1.24 so that the estimate of σ_s is

s_s = (1.24)(.17)/√29 = .039.

The estimate s_s would be used to estimate the standard error of a percentage
point, m + ks; the estimate would be √(s_m² + k² s_s²). Thus in the
example, a 95 per cent confidence interval for the 5 per cent point
would be estimated by 1.04 ± (1.96)√((.035)² + (1.645)²(.039)²) or .88 to
1.20. We should mention again that the estimation of small or large
percentage points depends strongly on the assumption of normality in
the tails. It can easily happen that a relatively small error in this assumption
may far outweigh the sampling error indicated by the confidence
interval, especially in the case of very extreme percentages, say
1 per cent or 0.1 per cent.
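The corresponding check for s_s and the 5 per cent point (an editorial sketch; H = 1.24 is read from Figure 2, and the result agrees with the interval quoted in the text to within rounding):

```python
from math import sqrt

s, N, k = 0.17, 29, 1.645
s_m = 1.12 * s / sqrt(N)                    # standard error of the mean (G = 1.12)
s_s = 1.24 * s / sqrt(N)                    # standard error of s, equation (4) with H = 1.24
se_p5 = sqrt(s_m ** 2 + k ** 2 * s_s ** 2)  # standard error of the 5 per cent point m - ks
lo, hi = 1.04 - 1.96 * se_p5, 1.04 + 1.96 * se_p5
```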

CHOICE OF TESTING INTERVAL

The curves in Figure 2 have been extended beyond d = 2σ in order
to show what happens to the measures of precision for larger intervals.
Curve G shows that the precision of the mean steadily decreases as d
increases. The two branches of H show that there is an optimum spacing
for estimating the standard deviation depending on the position of the
mean relative to the testing levels. Since the mean is usually unknown,
this information is of little practical value.
Curve G indicates that the interval should be quite small for maxi-
mum precision in the mean, but in practice this is not true for several
reasons. In the first place the curves are for expected values and essen-
tially assume infinite sample sizes, and in fact very large samples are
required to get good estimates of the mean for a very small interval.
The estimate may be biased appreciably toward the initial testing level
unless the sample is very large. Secondly, a small interval may cause
one to waste observations unless a good choice for the initial level is
made. If a poor choice is made, many observations must be spent get-
ting from that level to the region of the mean. And finally, since σ is
usually unknown, the precision of the mean must actually be measured

by s, and the accuracy of s becomes poor for very small intervals as
shown by curve H.
All these considerations indicate that the interval should be within
the range of about 0.5σ to 2σ, and experiments with the method support
this conclusion.

APPENDIX A

If y is normally distributed with mean μ and variance σ², and if tests
are made at

y_i = y0 ± id,    i = 0, 1, 2, ...    (5)

where y0 is the level of the initial test, then there will be, say, n_i successes
and m_i failures at y_i, and the distribution of these latter variates
is

P(n, m | y0) = K ∏ p_i^{n_i} q_i^{m_i}    (6)

where the product extends over all i from −∞ to ∞,

p_i = ∫ from −∞ to y_i of (1/(σ√(2π))) e^{−(t−μ)²/2σ²} dt = 1 − q_i    (7)

and where K is not a function of μ and σ².


The estimation of μ and σ² is based on the principle of maximum likelihood.
We shall not maximize (6) directly, however, because a material
simplification in the analysis can be made by neglecting a small part of
the information in the sample. It is clear that

| n_i − m_{i−1} | = 0 or 1

so that either one of the sets (n_i) or (m_i) contains practically all the
information in the sample. If N = Σn_i and M = Σm_i, and assuming
N ≤ M, we may write (6) in the form

P(n, m | y0, M − N) = K′ ∏ (p_i q_{i−1})^{n_i}    (8)



and this is the expression which will be maximized. Even if M - N is
not small, only a small amount of information is being neglected, be-
cause in this instance the initial level will have been poorly chosen and
these neglected observations will have been spent in getting from y0
to the region of the mean; they will obviously contribute little to the
more precise location of the mean.
On putting the derivatives of (8) with respect to μ and σ equal to
zero we have the relations

Σ n_i (z_{i−1}/q_{i−1} − z_i/p_i) = 0    (9)

Σ n_i (x_{i−1} z_{i−1}/q_{i−1} − x_i z_i/p_i) = 0    (10)

where z_i represents the ordinate of the distribution of y at y_i and
x_i = (y_i − μ)/σ. The expected values of the left hand sides of these
two expressions are readily found to be zero on substituting E(n_i) for
n_i. E(n_i) may be determined from the relation

E(n_{i+1})/q_i = E(n_i)/p_i.    (11)

If we let

w_0 = 1,
w_i = ∏ over j = 0, ..., i−1 of (q_j/p_j)    for i > 0,
w_i = ∏ over j = i, ..., −1 of (p_j/q_j)    for i < 0,

then it follows that

E(n_i) = N w_i / Σ from −∞ to ∞ of w_i.    (12)
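Equations (11) and (12) are easy to evaluate directly. The sketch below is an editorial illustration (μ, σ, and the level grid are arbitrary choices) showing how the expected frequencies concentrate on the levels nearest the mean:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expected_counts(mu, sigma, d, y0, k, N):
    """E(n_i) for levels y_i = y0 + i*d, i = -k..k, from (11)-(12):
    w_0 = 1 and successive w's follow the ratios q_i/p_i upward and
    p_i/q_i downward, where p_i is the success probability at level i."""
    p = {i: Phi((y0 + i * d - mu) / sigma) for i in range(-k, k + 1)}
    w = {0: 1.0}
    for i in range(1, k + 1):          # w_i = prod_{j=0..i-1} q_j/p_j
        w[i] = w[i - 1] * (1 - p[i - 1]) / p[i - 1]
    for i in range(-1, -k - 1, -1):    # w_i = prod_{j=i..-1} p_j/q_j
        w[i] = w[i + 1] * p[i] / (1 - p[i])
    total = sum(w.values())
    return {i: N * w[i] / total for i in w}

counts = expected_counts(mu=0.0, sigma=1.0, d=1.0, y0=0.0, k=3, N=100)
# The two levels adjacent to the mean carry the bulk of the expected successes.
```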

The maximum likelihood estimates of μ and σ are the roots, say μ̂
and σ̂, of equations (9) and (10). While there is no simple closed expression
for these roots, it turns out that they can be very closely
approximated when d < 2σ. The function

α(u) = z(x)/q(x) − z(x + d/σ)/p(x + d/σ)

where u = x + d/2σ, is nearly linear in u when d < 2σ. This is illustrated
in Figure 3. Similarly

β(u) = x z(x)/q(x) − (x + d/σ) z(x + d/σ)/p(x + d/σ)

is nearly quadratic in u as is indicated by the graph of its first derivative
in Figure 3, where r = d/σ.

FIGURE 3
[α(u) and the first derivative of β(u) plotted against u for several values of r = d/σ.]

We may conclude therefore that the

estimates are essentially determined by the first two moments of the
y_i using the n_i as weights.
If we let

A = Σ i n_i,    B = Σ i² n_i,    (13)

we find

μ̂ = y′ + d(A/N ± 1/2)    (14)

and a corresponding relation, (15), expressing (NB − A²)/N² as a function
of σ/d.

The expression on the right of (15) is nearly linear in σ when d < 2σ,
and its linear approximation was used to determine the estimate of σ
given in equation (2). The function is plotted in Figure 4; the solid
branch represents the function when the mean falls at one of the y_i,
and the dashed branch when the mean falls midway between two levels.
The two branches diverge rapidly as d becomes larger than 2σ.
The variances and covariance of μ̂ and σ̂ are determined from the
second derivatives of L = log P, where P is defined by (8). The expected
values of the derivatives are readily found to be

E(∂²L/∂μ²) = −N/σ²G²    (16)

together with a corresponding expression for the cross derivative
E(∂²L/∂μ∂σ),    (17)

and

E(∂²L/∂σ²) = −N/σ²H².    (18)

Expression (17) does not vanish unless the mean falls on a level or
midway between two levels. However, we have regarded the covariance
as being negligible for all practical purposes. It gives rise to a maximum
correlation between μ̂ and σ̂ of the order of .0002 when d = σ, and
.02 when d = 2σ. We have then

σ²_{μ̂} = G²σ²/N,    σ²_{σ̂} = H²σ²/N    (19)

approximately, where G and H are defined in (16) and (18). These are
the functions plotted in Figure 2.

FIGURE 4
[The right side of (15) plotted against d/σ; solid branch: mean on a testing level, dashed branch: mean midway between two levels.]

It is not possible to make a very satisfactory comparison between
this method and the ordinary probit method, but the following computations
provide some indication of the relative efficiencies. Suppose 2N
individuals are divided into five equal groups and tested at y = 0,
±σ, ±2σ. Bartlett [8], for example, shows that the variance of μ̂ for
the probit analysis is about 5(.564)σ²/2N, whereas for the up and down
method the variance is σ²/N, which when divided by the former value
gives 71 per cent. When the sample is tested in six equal groups at
y = ±σ/2, ±3σ/2, ±5σ/2, the ratio becomes 58 per cent. But these comparisons
are not fair unless there is considerable uncertainty as to the general
location of the mean. If the mean can be located to within, say, σ
of its true position in advance of the experiment, then the efficiency of
the probit method can be much improved by using groups of unequal
size and testing the larger groups at levels thought to be near the mean.
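The 71 per cent figure follows directly from the two quoted variances (an editorial check of the arithmetic):

```python
var_probit = 5 * 0.564 / 2  # probit variance of the mean, in units of sigma^2/N (2N tests)
var_updown = 1.0            # up and down variance, same units
print(round(var_updown / var_probit, 2))  # → 0.71
```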

APPENDIX B

When the chosen testing interval is larger than 2σ, or when the intervals
are of unequal size, it is necessary to solve equations (9) and (10)
for μ and σ. The intervals will be of unequal size, for example, when
the normalizing transformation is unknown in advance of the experiment
and must be deduced from the results of the experiment itself.
A method of trial and error is probably as good as any other for solving
the equations. One would first choose preliminary estimates, say m and
s, of the roots by using equations (1) and (2) or simply by using guessed
values. These preliminary estimates would be adjusted until the equations
were satisfied to the desired degree of approximation. The left
side of (9) will be positive when the trial value of μ is too small, and
negative when it is too large. The left side of (10) will be positive when
the trial value of σ is too small, and negative when it is too large. The
equation (9) is relatively insensitive to changes in s, while the same is
true of (10) for changes in m.
In order to facilitate the computations, the accompanying tables of
z/p (Table I) and z/q (Table II) are provided. For negative values of
x, p and q are interchanged, that is

z(x)/p(x) = z(−x)/q(−x).
We shall illustrate the computation using the data of Figure 5. The
normalized heights are .1, .9, 1.5, 1.9 as indicated in the Figure. We
shall number the levels 0, 1, 2, 3 beginning with the lowest level.
Since there are more successes than failures, the latter are used to determine
the estimates. A preliminary estimate of μ may be obtained by
using the average of the midpoints of the intervals weighted by the
numbers n_i; thus we shall put

m1 = (1/29)[2(1.7) + 26(1.2) + 1(.5)] ≈ 1.2.

A rough estimate of σ may be determined by observing that the interval
0.9 to 1.5 appears to contain 26/29 or about 90 per cent of the distribution;
hence we may use

1.645 s1 ≈ (1/2)(1.5 − 0.9) = 0.3
s1 = 0.18.
RECORD OF A SAMPLE OF SIXTY TESTS

Normalized height    Number of x's    Number of o's
      1.9                  3                 0
      1.5                 27                 2
       .9                  1                26
       .1                  0                 1

FIGURE 5

In adjusting these estimates one might be tempted to adjust m1 first
by equation (9) and then go to equation (10) and adjust s1 by using a
good estimate of μ. It turns out, however, that the job can be done
much more rapidly by considering both equations together. The following
computational form may be used:
 i   n_i   h_i     x_i    n_i(z_{i-1}/q_{i-1} - z_i/p_i)   x_i z_i/q_i   x_i z_i/p_i   n_i(x_{i-1}z_{i-1}/q_{i-1} - x_i z_i/p_i)
 3    2    1.9    3.89             4.17                                      .000                   6.96
 2   26    1.5    1.67              .00                        3.48          .174                  -9.05
 1    1     .9   -1.67            -2.08                        -.174        -3.48                   3.48
 0    —     .1   -6.11                                          .000
                          sum:     2.09                                              sum:           1.39

Note that the table is arranged so that the frequencies of either the
o's or the x's will be entered in the table as though they were x's. The
symbol x_i represents (h_i − m1)/s1, where h_i is the height and m1 and s1
are the first approximations to μ and σ. The other computations are
defined by the column headings. Thus the figure 4.17 at the top of the
fifth column is obtained as 2(2.084 − .000); 2.084 being read from Table
II at x = 1.67, and .000 being the value of z/p at x = 3.89 as shown by
Table I. The sums, 2.09 and 1.39, of the fifth and eighth columns give

the values of the left hand sides of equations (9) and (10) respectively;
since both sums are positive, we conclude that both m1 and s1 are too
small. Using m2 = 1.3 and s2 = .19 we repeat the above calculation:

 i   n_i   h_i     x_i    n_i(z_{i-1}/q_{i-1} - z_i/p_i)   x_i z_i/q_i   x_i z_i/p_i   n_i(x_{i-1}z_{i-1}/q_{i-1} - x_i z_i/p_i)
 3    2    1.9    3.16             3.12                                      .01                    3.28
 2   26    1.5    1.05            -5.86                        1.64          .282                  -9.76
 1    1     .9   -2.11            -2.47                        -.093        -5.21                   5.21
 0    —     .1   -6.32                                          .000
                          sum:    -5.21                                              sum:          -1.26
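The trial-and-error adjustment can be automated. The sketch below is editorial, not the paper's; it evaluates the left sides of (9) and (10) with the failure counts entered one level up, as the note above prescribes, and reproduces the first iteration apart from table-rounding:

```python
from math import erf, exp, pi, sqrt

def z(x):    # ordinate of the standard normal distribution
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phi(x):  # standard normal CDF: p(x); q(x) = 1 - p(x)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def score_sums(mu, sigma, levels, counts):
    """Left-hand sides of equations (9) and (10) at trial values mu, sigma.
    levels are the testing heights in increasing order; counts[i] is the
    frequency entered at levels[i] (counts[0] = 0 for the lowest level)."""
    x = [(h - mu) / sigma for h in levels]
    eq9 = eq10 = 0.0
    for i in range(1, len(levels)):
        n = counts[i]
        zq = z(x[i - 1]) / (1.0 - Phi(x[i - 1]))  # z/q at the level below
        zp = z(x[i]) / Phi(x[i])                  # z/p at the entry level
        eq9 += n * (zq - zp)
        eq10 += n * (x[i - 1] * zq - x[i] * zp)
    return eq9, eq10

# First iteration of the example: m1 = 1.2, s1 = 0.18
eq9, eq10 = score_sums(1.2, 0.18, levels=[.1, .9, 1.5, 1.9], counts=[0, 1, 26, 2])
# Both sums come out positive, so m1 and s1 are too small, as in the text.
```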

TABLE I
VALUES OF z/p

x      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09

0.0 0.798 0.792 0.785 0.779 0.773 0.766 0.760 0.754 0.748 0.741
0.1 0.735 0.729 0.723 0.717 0.711 0.705 0.699 0.693 0.687 0.681
0.2 0.675 0.669 0.663 0.657 0.652 0.646 0.640 0.634 0.629 0.623
0.3 0.617 0.612 0.606 0.600 0.595 0.589 0.584 0.578 0.573 0.567
0.4 0.562 0.556 0.551 0.546 0.540 0.535 0.530 0.525 0.519 0.514
0.5 0.509 0.504 0.499 0.494 0.489 0.484 0.479 0.474 0.469 0.464
0.6 0.459 0.454 0.449 0.445 0.440 0.435 0.430 0.426 0.421 0.417
0.7 0.412 0.407 0.403 0.398 0.394 0.389 0.385 0.381 0.376 0.372
0.8 0.368 0.363 0.359 0.355 0.351 0.346 0.342 0.338 0.334 0.330
0.9 0.326 0.322 0.318 0.314 0.310 0.306 0.303 0.299 0.295 0.291
1.0 0.288 0.284 0.280 0.277 0.273 0.269 0.266 0.262 0.259 0.256
1.1 0.252 0.249 0.245 0.242 0.239 0.235 0.232 0.229 0.226 0.223
1.2 0.219 0.216 0.213 0.210 0.207 0.204 0.201 0.198 0.195 0.193
1.3 0.190 0.187 0.184 0.181 0.179 0.176 0.173 0.171 0.168 0.165
1.4 0.163 0.160 0.158 0.155 0.153 0.150 0.148 0.146 0.143 0.141
1.5 0.139 0.137 0.134 0.132 0.130 0.128 0.126 0.124 0.121 0.119
1.6 0.117 0.115 0.113 0.111 0.110 0.108 0.106 0.104 0.102 0.100
1.7 0.098 0.097 0.095 0.093 0.092 0.090 0.088 0.087 0.085 0.083
1.8 0.082 0.080 0.079 0.077 0.076 0.074 0.073 0.072 0.070 0.069
1.9 0.068 0.066 0.065 0.064 0.062 0.061 0.060 0.059 0.058 0.056
2.0 0.055 0.054 0.053 0.052 0.051 0.050 0.049 0.048 0.047 0.046
2.1 0.045 0.044 0.043 0.042 0.041 0.040 0.039 0.038 0.038 0.037
2.2 0.036 0.035 0.034 0.034 0.033 0.032 0.031 0.031 0.030 0.029
2.3 0.029 0.028 0.027 0.027 0.026 0.026 0.025 0.024 0.024 0.023
2.4 0.023 0.022 0.022 0.021 0.020 0.020 0.019 0.019 0.019 0.018
2.5 0.018 0.017 0.017 0.016 0.016 0.016 0.015 0.015 0.014 0.014
2.6 0.014 0.013 0.013 0.013 0.012 0.012 0.012 0.011 0.011 0.011
2.7 0.010 0.010 0.010 0.010 0.009 0.009 0.009 0.009 0.008 0.008
2.8 0.008 0.008 0.008 0.007 0.007 0.007 0.007 0.007 0.006 0.006
2.9 0.006 0.006 0.006 0.005 0.005 0.005 0.005 0.005 0.005 0.005
3.0 0.004 0.004 0.004 0.004 0.004 0.004 0.004 0.004 0.003 0.003
3.1 0.003 0.003 0.003 0.003 0.003 0.003 0.003 0.003 0.003 0.002
3.2 0.002 0.002 0.002 0.002 0.002 0.002 0.002 0.002 0.002 0.002
3.3 0.002 0.002 0.002 0.002 0.002 0.001 0.001 0.001 0.001 0.001
3.4 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001
3.5 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001
3.6 0.001 0.001 0.001 0.001 0.001 0.001 0.000
OBTAINING AND ANALYZING SENSITIVITY DATA 125
These results show that the roots are bracketed, and good estimates of
μ and σ may be obtained by interpolation between the sums. Inter-
polating between 1.2 and 1.3 using 2.09, 0, -5.20, we find m3 = 1.23,
and similarly s3 = .185. By doing two more calculations similar to the
two illustrated above, one would verify the third figures in m and s
and obtain good estimates for the fourth figures. Here, the results
to three figures are m = 1.21 and s = .187. However, the data do not

TABLE II
VALUES OF z/q

  x    .00    .01    .02    .03    .04    .05    .06    .07    .08    .09

0.0 0.798 0.804 0.811 0.817 0.824 0.830 0.836 0.843 0.849 0.856
0.1 0.863 0.869 0.876 0.882 0.889 0.896 0.902 0.909 0.916 0.923
0.2 0.929 0.936 0.943 0.950 0.957 0.964 0.970 0.977 0.984 0.991
0.3 0.998 1.005 1.012 1.019 1.026 1.033 1.040 1.047 1.054 1.062
0.4 1.069 1.076 1.083 1.090 1.097 1.105 1.112 1.119 1.126 1.134
0.5 1.141 1.148 1.156 1.163 1.171 1.178 1.185 1.193 1.200 1.207
0.6 1.215 1.222 1.230 1.237 1.246 1.253 1.260 1.268 1.275 1.283
0.7 1.290 1.298 1.306 1.313 1.321 1.329 1.336 1.344 1.352 1.360
0.8 1.367 1.375 1.383 1.391 1.399 1.406 1.414 1.422 1.430 1.438
0.9 1.446 1.454 1.461 1.469 1.477 1.485 1.493 1.501 1.509 1.517
1.0 1.525 1.533 1.541 1.549 1.557 1.565 1.573 1.581 1.590 1.598
1.1 1.606 1.614 1.622 1.630 1.638 1.646 1.655 1.663 1.671 1.679
1.2 1.687 1.696 1.704 1.712 1.720 1.729 1.737 1.745 1.754 1.762
1.3 1.770 1.779 1.787 1.795 1.804 1.812 1.820 1.829 1.838 1.846
1.4 1.854 1.862 1.871 1.879 1.888 1.896 1.905 1.913 1.922 1.930
1.5 1.938 1.947 1.955 1.964 1.972 1.981 1.990 1.998 2.007 2.015
1.6 2.024 2.033 2.041 2.050 2.058 2.067 2.076 2.084 2.093 2.102
1.7 2.110 2.119 2.128 2.136 2.145 2.154 2.162 2.171 2.180 2.188
1.8 2.197 2.206 2.215 2.223 2.232 2.241 2.250 2.258 2.267 2.276
1.9 2.285 2.294 2.303 2.311 2.320 2.329 2.338 2.346 2.355 2.364
2.0 2.373 2.381 2.390 2.399 2.408 2.417 2.426 2.435 2.444 2.453
2.1 2.462 2.470 2.479 2.488 2.497 2.506 2.515 2.524 2.533 2.542
2.2 2.551 2.560 2.569 2.578 2.587 2.596 2.605 2.614 2.623 2.632
2.3 2.641 2.650 2.659 2.668 2.677 2.687 2.696 2.705 2.714 2.723
2.4 2.732 2.741 2.750 2.759 2.768 2.777 2.786 2.795 2.805 2.814
2.5 2.823 2.832 2.841 2.850 2.859 2.868 2.878 2.887 2.896 2.905
2.6 2.914 2.923 2.932 2.942 2.951 2.960 2.969 2.978 2.987 2.997
2.7 3.006 3.015 3.024 3.033 3.043 3.052 3.061 3.070 3.079 3.089
2.8 3.098 3.107 3.116 3.126 3.136 3.144 3.153 3.163 3.172 3.181
2.9 3.190 3.200 3.209 3.218 3.227 3.237 3.246 3.255 3.265 3.274
3.0 3.283 3.292 3.302 3.311 3.320 3.330 3.339 3.348 3.358 3.367
3.1 3.376 3.386 3.395 3.404 3.413 3.423 3.432 3.441 3.451 3.460
3.2 3.470 3.479 3.488 3.498 3.507 3.516 3.526 3.535 3.544 3.554
3.3 3.563 3.573 3.582 3.591 3.601 3.610 3.620 3.629 3.638 3.648
3.4 3.657 3.667 3.676 3.685 3.695 3.704 3.714 3.723 3.732 3.742
3.5 3.751 3.761 3.770 3.780 3.789 3.799 3.808 3.817 3.827 3.836
3.6 3.846 3.855 3.865 3.874 3.884 3.893 3.902 3.912 3.921 3.931
3.7 3.940 3.950 3.959 3.969 3.978 3.988 3.997 4.007 4.016 4.026
3.8 4.035 4.045 4.054 4.064 4.073 4.083 4.092 4.102 4.111 4.121
3.9 4.130 4.140 4.149 4.159 4.169 4.178 4.188 4.197 4.206 4.216
4.0 4.226 4.235 4.245 4.254 4.264 4.273 4.283 4.292 4.302 4.312
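Tables I and II tabulate z/p and z/q, the standard normal ordinate divided by the area below and above x respectively. As a modern convenience (not part of the original paper; the function names `zp` and `zq` are mine), their entries can be regenerated with standard functions:

```python
import math

SQRT2 = math.sqrt(2.0)
SQRT2PI = math.sqrt(2.0 * math.pi)

def zp(x):
    """Table I: normal ordinate z(x) over the area p = P(X < x)."""
    z = math.exp(-0.5 * x * x) / SQRT2PI
    p = 0.5 * (1.0 + math.erf(x / SQRT2))
    return z / p

def zq(x):
    """Table II: normal ordinate z(x) over the area q = 1 - p."""
    z = math.exp(-0.5 * x * x) / SQRT2PI
    q = 0.5 * (1.0 - math.erf(x / SQRT2))
    return z / q

# spot checks against the printed tables
print(round(zp(0.00), 3))  # 0.798, first entry of Table I
print(round(zp(1.67), 3))  # 0.104, used in the worked example
print(round(zq(1.67), 3))  # 2.084, used in the worked example
```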

warrant any more accuracy in the roots than is given by m3 and s3,
and one would not do the two extra computations. The results in Figure
5 were obtained by using the same set of observations (with mean 1.312
and standard deviation .158) as was used to obtain the results of
Figure 1.
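The interpolation step described above can likewise be carried out mechanically. The sketch below is an illustration, not from the paper; it evaluates the two maximum-likelihood sums with exact normal functions rather than the rounded table entries (so the bracketing values come out near, but not identical to, the printed 2.09, 1.39, -5.20 and -1.26), and the helper names are mine:

```python
import math

def z(x):
    """Standard normal ordinate."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def P(x):
    """Standard normal area below x."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sums(m, s):
    """Left-hand sides of equations (9) and (10) for the Figure 5 record."""
    heights = [0.1, 0.9, 1.5, 1.9]
    counts = [0, 1, 26, 2]
    x = [(h - m) / s for h in heights]
    s_mu = s_sigma = 0.0
    for i in range(1, 4):
        zq_prev = z(x[i - 1]) / (1.0 - P(x[i - 1]))
        zp_i = z(x[i]) / P(x[i])
        s_mu += counts[i] * (zq_prev - zp_i)
        s_sigma += counts[i] * (x[i - 1] * zq_prev - x[i] * zp_i)
    return s_mu, s_sigma

a_mu, a_sig = sums(1.2, 0.18)   # both sums positive
b_mu, b_sig = sums(1.3, 0.19)   # both sums negative: roots bracketed

# linear interpolation for the zero crossing of each sum
m3 = 1.2 + 0.1 * a_mu / (a_mu - b_mu)
s3 = 0.18 + 0.01 * a_sig / (a_sig - b_sig)
print(round(m3, 2), round(s3, 3))  # 1.23 0.185
```

This reproduces the interpolated values m3 = 1.23 and s3 = .185 given in the text.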
