Logical Design of An Electronic Computing Instrument
Arthur W. Burks
Herman H. Goldstine
John von Neumann
The Institute for Advanced Study
2 September 1947
(a) For equations of parabolic or hyperbolic type in two independent variables the integration process is
essentially a double induction. To find the values of the dependent variables at time t + ∆t one need
only know them at time t; the memory therefore has to hold little more than the instantaneous
solution, i.e. the solution at a single value of t.
(b) For total differential equations the memory requirements are clearly similar to, but smaller than, those
discussed in (a) above.
(c) Problems that are solved by iterative procedures such as systems of linear equations or elliptic partial
differential equations, treated by relaxation techniques, may be expected to require quite extensive
memory capacity. The memory requirement for such problems is apparently much greater than for
those problems in (a) above in which one needs only to store information corresponding to the
instantaneous value of one variable [t in (a) above], while now entire solutions (covering all values of
all variables) must be stored. This apparent discrepancy in magnitudes can, however, be somewhat
overcome by the use of techniques which permit the use of much coarser integration meshes in this
case, than in the cases under (a).
2.3. It is reasonable at this time to build a machine that can conveniently handle problems several orders of
magnitude more complex than are now handled by existing machines, electronic or electro-mechanical. We
consequently plan on a fully automatic electronic storage facility of about 4,000 numbers of 40 binary digits
each. This corresponds to a precision of 2^−40 ≈ 0.9 × 10^−12, i.e. of about 12 decimals. We believe that this
memory capacity exceeds the capacities required for most problems that one deals with at present by a
factor of about 10. The precision is also safely higher than what is required for the great majority of present
day problems. In addition, we propose that we have a subsidiary memory of much larger capacity, which is
also fully automatic, on some medium such as magnetic wire or tape.
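The quoted figures are easy to verify; a quick check in Python (ours, for illustration):

```python
import math

# 40 binary digits give a precision of 2^-40, per the text above.
precision = 2.0 ** -40
print(precision)                  # about 0.9e-12

# Equivalent number of decimal digits carried by 40 bits:
print(40 * math.log10(2))         # about 12.04, i.e. about 12 decimals
```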
5.2. In a discussion of the arithmetical organs of a computing machine one is naturally led to a consideration
of the number system to be adopted. In spite of the longstanding tradition of building digital machines in the
decimal system, we feel strongly in favor of the binary system for our device. Our fundamental unit of
memory is naturally adapted to the binary system since we do not attempt to measure gradations of charge,
but are content only to distinguish between two states.
Let p_n(v) denote the probability that the sum of two n-digit binary numbers contains a carry sequence of
length ≥ v, and let a_n denote the expected length of the longest carry sequence. Then

a_n = Σ_{v=0}^{n} v [p_n(v) − p_n(v+1)],

and, summing by parts,

a_n = Σ_{v=1}^{n} p_n(v).

The probabilities p_n(v) satisfy the recursion

p_n(v) = p_{n−1}(v) + [1 − p_{n−v}(v)] / 2^(v+1)   if v ≤ n.
Indeed, p_n(v) is the probability that the sum of two n-digit numbers contains a carry sequence of length ≥ v.
This probability obtains by adding the probabilities of two mutually exclusive alternatives. First: The
n − 1 first digits of the two numbers by themselves contain a carry sequence of length ≥ v. This has the
probability p_{n−1}(v). Second: The n − 1 first digits of the two numbers by themselves do not contain a carry
sequence of length ≥ v. In this case any carry sequence of length ≥ v in the total numbers (of length n) must
end with the last digits of the total sequence. Hence these must form the combination 1, 1. The next v − 1
digits must propagate the carry, hence each of these must form the combination 1, 0 or 0, 1. (The
combinations 1, 1 and 0, 0 do not propagate a carry.) The probability of the combination 1, 1 is ¼, that of
one of the alternative combinations 1, 0 or 0, 1 is ½. The total probability of this sequence is therefore
¼·(½)^(v−1) = (½)^(v+1). The remaining n − v digits must not contain a carry sequence of length ≥ v. This has the
probability 1 − p_{n−v}(v). Thus the probability of the second case is [1 − p_{n−v}(v)]/2^(v+1). Combining these two
cases, the desired relation

p_n(v) = p_{n−1}(v) + [1 − p_{n−v}(v)] / 2^(v+1)

obtains.
Since p_{v−1}(v) = 0, the recursion gives the telescoping sum

p_n(v) = Σ_{i=v}^{n} [p_i(v) − p_{i−1}(v)],

which is not in excess of (n − v + 1)/2^(v+1), since there are n − v + 1 terms in the sum, each at most
1/2^(v+1); since, moreover, each p_n(v) is a probability, it is not greater than 1. Hence we have
a_n = Σ_{v=1}^{K−1} p_n(v) + Σ_{v=K}^{n} p_n(v) ≤ Σ_{v=1}^{K−1} 1 + Σ_{v=K}^{∞} n/2^(v+1) = K − 1 + n/2^K.

This last expression is clearly linear in n in the interval 2^K ≤ n ≤ 2^(K+1), and it is = K for n = 2^K and = K + 1 for
n = 2^(K+1), i.e. it is = log₂ n at both ends of this interval. Since the function log₂ n is everywhere concave
from below, it follows that our expression is ≤ log₂ n throughout this interval. Thus a_n ≤ log₂ n. This holds
for all K, i.e. for all n, and it is the inequality which we wanted to prove.
For our case n = 40 we have a_n ≤ log₂ 40 ≈ 5.3, i.e. an average length of about 5 for the longest carry
sequence. (The actual value of a_40 is 4.62.)
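The recursion for p_n(v), together with a_n = Σ_{v=1}^{n} p_n(v), can be evaluated directly; a sketch in Python (the function name is ours):

```python
def longest_carry_average(n):
    """a_n = sum of p_n(v) for v = 1..n, where p_m(v) follows the
    recursion p_m(v) = p_{m-1}(v) + (1 - p_{m-v}(v)) / 2^(v+1) for
    v <= m, and p_m(v) = 0 for v > m."""
    p = [[0.0] * (n + 1) for _ in range(n + 1)]   # p[m][v], zero when v > m
    for m in range(1, n + 1):
        for v in range(1, m + 1):
            tail = p[m - v][v]                    # the p_{m-v}(v) term
            p[m][v] = p[m - 1][v] + (1.0 - tail) / 2 ** (v + 1)
    return sum(p[n][v] for v in range(1, n + 1))

print(longest_carry_average(40))   # close to the quoted 4.62, and below log2 40
```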
5.7. Having discussed the addition, we can now go on to the subtraction. It is convenient to discuss at this
point our treatment of negative numbers, and in order to do that right, it is desirable to make some
observations about the treatment of numbers in general.
Our numbers are 40 digit aggregates, the left-most digit being the sign digit, and the other digits genuine
binary digits, with positional values 2^−1, 2^−2, …, 2^−39 (going from left to right). Our accumulator will,
however, treat the sign digit, too, as a binary digit with the positional value 2^0—at least when it functions as
an adder. For numbers between 0 and 1 this is clearly all right: The left-most digit will then be 0, and if 0 at
this place is taken to represent a + sign, then the number is correctly expressed with its sign and 39 binary
digits.
Let us now consider two unrestricted 40 binary digit numbers. The accumulator will add them,
with the digit-adding and the carrying mechanisms functioning normally and identically in all 40 positions.
There is one reservation, however: If a carry originates in the left-most position, then it has nowhere to go
from there (there being no further positions to the left) and is “lost”. This means, of course, that the addend
and the augend, both numbers between 0 and 2, produced a sum exceeding 2, and the accumulator, being
unable to express a digit with a positional value 2^1, which would now be necessary, omitted 2. That is, the
sum was formed correctly, excepting a possible error 2. If several such additions are performed in
succession, then the ultimate error may be any integer multiple of 2. That is, the accumulator is an adder
which allows errors that are integer multiples of 2—it is an adder modulo 2.
It should be noted that our convention of placing the binary point immediately to the right of the left-
most digit has nothing to do with the structure of the adder. In order to make this point clearer we proceed
to discuss the possibilities of positioning the binary point in somewhat more detail.
We begin by enumerating the 40 digits of our numbers (words) from left to right. In doing this we use an
index h = 1, …, 40. Now we might have placed the binary point just as well between digits j and j + 1, j = 0,
…, 40. Note that j = 0 corresponds to the position at the extreme left (there is no digit h = j = 0); j = 40
corresponds to the position at the extreme right (there is no position h = j + 1 = 41); and j = 1 corresponds to
our above choice. Whatever our choice of j, it does not affect the correctness of the accumulator’s addition.
(This is equally true for subtraction, cf. below, but not for multiplication and division, cf. 5.8.) Indeed, we
have merely multiplied all numbers by 2^(j−1) (as against our previous convention), and such a “change of
scale” has no effect on addition (and subtraction). However, now the accumulator is an adder which allows
errors that are integer multiples of 2^j—it is an adder modulo 2^j. We mention this because it is occasionally
convenient to think in terms of a convention which places the binary point at the right end of the digital
aggregate. Then j = 40, our numbers are integers, and the accumulator is an adder modulo 2^40. We must
emphasize, however, that all of this, i.e. all attributions of values to j, are purely convention—i.e. it is solely
the mathematician’s interpretation of the functioning of the machine and not a physical feature of the
machine. This convention will necessitate measures that have to be made effective by actual physical
features of the machine—i.e. the convention will become a physical and engineering reality only when we
come to the organs of multiplication.
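Under the j = 40 convention the accumulator is simply an integer adder that discards the carry out of the left-most stage; a minimal sketch in Python:

```python
MASK = (1 << 40) - 1          # 40 stages, positional values 2^39 .. 2^0

def acc_add(a, b):
    """A carry originating in the left-most position has nowhere to
    go and is lost, so the accumulator adds modulo 2^40."""
    return (a + b) & MASK

big = (1 << 39) + 123         # both summands have the top digit set
print(acc_add(big, big))      # -> 246: the sum exceeded 2^40, and 2^40 was omitted
```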
Originally published in Papers of John von Neumann on Computing and Computer Theory, pp. 97–142.
Copyright 1987 The MIT Press.
We will use the convention j = 1, i.e. our numbers lie between 0 and 2 and the accumulator adds modulo 2.
This being so, these numbers between 0 and 2 can be used to represent all numbers modulo 2. Any real
number x agrees modulo 2 with one and only one number x̄ between 0 and 2—or, to be quite precise: 0 ≤
x̄ < 2. Since our addition functions modulo 2, we see that the accumulator may be used to represent and to
add numbers modulo 2.
This determines the representation of negative numbers: If x < 0, then we have to find the unique integer
multiple 2s of 2 (s = 1, 2, …) such that 0 ≤ x̄ < 2 for x̄ = x + 2s (i.e. −2s ≤ x < 2(1 − s)), and represent x
by the digitalization of x̄.
In this way, however, the sign digit character of the left-most digit is lost: It can be 0 or 1 for both x ≥ 0
and x < 0, hence 0 in the left-most position can no longer be associated with the + sign of x. This may seem
a bad deficiency of the system, but it is easy to remedy—at least to an extent which suffices for our
purposes. This is done as follows:
We will usually work with numbers x between −1 and 1—or, to be quite precise: −1 ≤ x < 1. Now the x̄
with 0 ≤ x̄ < 2, which differs from x by an integer multiple of 2, behaves as follows: If x ≥ 0, then 0 ≤ x < 1,
hence x̄ = x, and so 0 ≤ x̄ < 1; the left-most digit of x̄ is 0. If x < 0, then −1 ≤ x < 0, hence x̄ = x + 2, and
so 1 ≤ x̄ < 2; the left-most digit of x̄ is 1. Thus the left-most digit (of x̄) is now a precise equivalent of the
sign (of x): 0 corresponds to + and 1 to −.
Summing up:
The accumulator may be taken to represent all real numbers modulo 2, and it adds them modulo 2. If x
lies between –1 and 1 (precisely: –1 ≤ x < 1)—as it will in almost all of our uses of the machine—then the
left-most digit represents the sign: 0 is + and 1 is –.
Consider now a negative number x with −1 ≤ x < 0. Put x = −y, 0 < y ≤ 1. Then we digitalize x by
representing it as x̄ = x + 2 = 2 − y = 1 + (1 − y). That is, the left-most (sign) digit of x̄ is, as it should be,
1; and the remaining 39 digits are those of the complement of y = −x = |x|, i.e. those of 1 − y. Thus we have
been led to the familiar representation of negative numbers by complementation.
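This representation—in modern terms, two's complement scaled to −1 ≤ x < 1—can be sketched in Python (helper names are ours):

```python
def digitalize(x):
    """Represent -1 <= x < 1 as a 40-digit word: replace x by the
    unique x-bar = x + 2s with 0 <= x-bar < 2, then scale by 2^39 so
    the word is an integer 0 .. 2^40 - 1; the left-most digit is
    then 0 for x >= 0 and 1 for x < 0."""
    assert -1.0 <= x < 1.0
    xbar = x if x >= 0 else x + 2.0
    return int(xbar * 2 ** 39)

def sign_digit(word):
    return (word >> 39) & 1

print(sign_digit(digitalize(0.625)), sign_digit(digitalize(-0.625)))  # 0 1
# Complementation: the word for -y is the complement of the word for y.
assert digitalize(-0.625) == (1 << 40) - digitalize(0.625)
```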
The connection between the digits of x and those of −x is now easily formulated, for any x ≷ 0. Indeed,
−x is equivalent to

2 − x = (2 − 2^−39) − x + 2^−39 = (Σ_{i=0}^{39} 2^−i − x) + 2^−39,

i.e. the digits of −x obtain by complementing every digit of x and then adding 1 in the right-most position.
2p_k = p_{k−1} + y_k,   y_k = 0 for ξ_(40−k) = 0,   y_k = y for ξ_(40−k) = 1.   (1)

That is, we do nothing or add y, according to whether ξ_(40−k) = 0 or 1. We can then form p_k by halving 2p_k.
Note that the addition of (1) produces no carry beyond the 2^0 position, i.e. the sign digit: 0 ≤ p_h < 1 is
true for h = 0, and if it is true for h = k − 1, then (1) extends it to h = k also, since 0 ≤ y_k < 1. Hence the sum
in (1) is ≥ 0 and < 2, and no carries beyond the 2^0 position arise.
Hence pk obtains from 2pk by a simple right shift, which is combined with filling in the sign digit (that is
freed by this shift) with a 0. This right shift is effected by an electronic shifter that is part of Ac.
Now

p_39 = 2^−1[2^−1[… 2^−1[2^−1 ξ_39 y + ξ_38 y] … + ξ_2 y] + ξ_1 y] = Σ_{i=1}^{39} 2^−i ξ_i y = xy.
Thus this process produces the product xy, as desired. Note that this xy is the exact product of x and y.
Since x and y are 39 digit binaries, their exact product xy is a 78 digit binary (we disregard the sign digit
throughout). However, Ac will only hold 39 of these. These are clearly the left 39 digits of xy. The right 39
digits of xy are dropped from Ac one by one in the course of the 39 steps, or to be more specific, of the 39
right shifts. We will see later that these right 39 digits of xy should and will also be conserved (cf. the end of
this section and the end of 5.12, as well as 6.6.3). The left 39 digits, which remain in Ac, should also be
rounded off, but we will not discuss this matter here (cf. loc. cit. above and 9.9, Part II).
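The 39-step process of (1), including the Ac→AR linkage that conserves the right product digits, can be imitated with integers standing in for the 39-digit fractions; a sketch (names ours, not the machine's):

```python
def multiply(multiplier, multiplicand, n=39):
    """Shift-and-add multiplication as in 5.8: n add-and-right-shift
    steps for non-negative n-digit operands (integers < 2^n).  Ac
    ends with the left n product digits; each right shift pushes the
    digit freed from Ac into AR, conserving the full 2n digits."""
    ac, ar = 0, multiplier        # AR initially holds the multiplier
    for _ in range(n):
        if ar & 1:                # sense the right-most digit of AR
            ac += multiplicand    # add y, or do nothing (equation (1))
        ar = (ar >> 1) | ((ac & 1) << (n - 1))   # Ac's last digit -> AR's top
        ac >>= 1                  # the right shift (halving of 2 p_k)
    return (ac << n) | ar         # full 78-digit product

# 0.011 x 0.101 (i.e. 3/8 * 5/8 = 15/64), scaled to 39-digit integers:
assert multiply(3 << 36, 5 << 36) == 15 << 72
```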
To complete the general picture of our multiplication technique we must consider how we sense the
respective digits of our multiplier. There are two schemes which come to one’s mind in this connection.
One is to have a gate tube associated with each flip-flop of AR in such a fashion that this gate is open if a
digit is 1 and closed if it is null. We would then need a 39-stage counter to act as a switch which would
successively stimulate these gate tubes to react. A more efficient scheme is to build into AR a shifter circuit
which enables AR to be shifted one stage to the right each time Ac is shifted and to sense the value of the
digit in the right-most flip-flop of AR. The shifter itself requires one gate tube per stage. We need in
addition a counter to count out the 39 steps of the multiplication, but this can be achieved by a six stage
binary counter. Thus the latter is more economical of tubes and has one additional virtue from our point of
view which we discuss in the next paragraph.
The choice of 40 digits to a word (including the sign) is probably adequate for most computational
problems but situations certainly might arise when we desire higher precision, i.e. words of greater length.
A trivial illustration of this would be the computation of π to more places than are now known (about 700
decimals, i.e. about 2,300 binaries). More important instances are the solutions of N linear equations in N
variables for large values of N. The extra precision becomes probably necessary when N exceeds a limit
somewhere between 20 and 40. A justification of this estimate has to be based on a detailed theory of
numerical matrix inversion which will be given in a subsequent report. It is therefore desirable to be able to
handle numbers of 39k digits and signs by means of program instructions. One way to achieve this end is to
use k words to represent a 39k digit number with signs. (In this way 39 digits in each 40 digit word are used,
but all sign digits, excepting the first one, are apparently wasted, cf. however the treatment of double
precision numbers in Chapter 9, Part II.) It is, of course, necessary in this case to instruct the machine to
perform the elementary operations of arithmetic in a manner that conforms with this interpretation of k-word
complexes as single numbers. (Cf. 9.8-9.10, Part II.) In order to be able to treat numbers in this manner, it is
desirable to keep not 39 digits in a product, but 78; this is discussed in more detail in 6.6.3 below. To
accomplish this end (conserving 78 product digits) we connect, via our shifter circuit, the right-most digit of
Ac with the left-most non-sign digit of AR. Thus, when in the process of multiplication a shift is ordered,
the last digit of Ac is transferred into the place in AR made vacant when the multiplier was shifted.
5.9. To conclude our discussion of the multiplication of positive numbers, we note this:
As described thus far, the multiplier forms the 78 digit product, xy, for a 39 digit multiplier x and a 39
digit multiplicand y. We assumed x ≥ 0, y ≥ 0 and therefore had xy ≥ 0, and we will only depart from these
assumptions in 5.10. In addition to these, however, we also assumed x < 1, y < 1, i.e. that x, y have their
binary points both immediately right of the sign digit, which implied the same for xy. One might question
the necessity of these additional assumptions.
Prima facie they may seem mere conventions, which affect only the mathematician’s interpretation of
the functioning of the machine, and not a physical feature of the machine. (Cf. the corresponding situation in
addition and subtraction, in 5.7.) Indeed, if x had its binary point between digits j and j + 1 from the left (cf.
the discussion of 5.7 dealing with this j; it also applies to k below), and y between k and k + 1, then our
above method of multiplication would still give the correct result xy, provided that the position of the binary
point in xy is appropriately assigned. Specifically: Let the binary point of xy be between digits l and l + 1. x
has the binary point between digits j and j + 1, and its sign digit is 0, hence its range is 0 ≤ x < 2^(j−1). Similarly
y has the range 0 ≤ y < 2^(k−1), and xy has the range 0 ≤ xy < 2^(l−1). Now the ranges of x and y imply that the
range of xy is necessarily 0 ≤ xy < 2^(j−1)·2^(k−1) = 2^(j+k−2). Hence l = j + k − 1. Thus it might seem that our actual positioning
of the binary point—immediately right of the sign digit, i.e. j = k = 1—is still a mere convention.
It is therefore important to realize that this is not so: The choices of j and k actually correspond to very
real, physical, engineering decisions. The reason for this is as follows: It is desirable to base the running of
the machine on a sole, consistent mathematical interpretation. It is therefore desirable that all arithmetical
operations be performed with an identically conceived positioning of the binary point in Ac. Applying this
principle to x and y gives j = k. Hence the position of the binary point for xy is given by j + k – 1= 2j – 1. If
this is to be the same as for x, and y, then 2j – 1 = j, i.e. j = 1 ensues—that is our above positioning of the
binary point immediately right of the sign digit.
There is one possible escape: To place into Ac not the left 39 digits of xy (not counting the sign digit 0),
but the digits j to j + 38 from the left. Indeed, in this way the position of the binary point of xy will be (2j –
1) – (j – 1) = j, the same as for x and y.
This procedure means that we drop the left j − 1 and the right 40 − j digits of xy and hold the middle 39 in
Ac. Note that this positioning of the binary point means that x < 2^(j−1), y < 2^(j−1), and xy can only be used if xy < 2^(j−1).
Now the assumptions secure only xy < 2^(2j−2). Hence xy must be 2^(j−1) times smaller than it might be. This is
just the thing which would be secured by the vanishing of the left j − 1 digits that we had to drop from Ac,
as shown above.
If we wanted to use such a procedure, with those dropped left j – 1 digits really existing, i.e. with j ≠ 1,
then we would have to make physical arrangements for their conservation elsewhere. Also the general
mathematical planning for the machine would be definitely complicated, due to the physical fact that Ac
now holds a rather arbitrarily picked middle stretch of 39 digits from among the 78 digits of xy.
Alternatively, we might fail to make such arrangements, but this would necessitate to see to it in the
mathematical planning of each problem, that all products turn out to be 2^(j−1) times smaller than their a priori
maxima. Such an observance is not at all impossible; indeed similar things are unavoidable for the other
operations. [For example, with a factor 2 in addition (of positives) or subtraction (of opposite sign
quantities). Cf. also the remarks in the first part of 5.12, dealing with keeping “within range”.] However, it
involves a loss of significant digits, and the choice j = 1 makes it unnecessary in multiplication.
We will therefore make our choice j = 1, i.e. the positioning of the binary point immediately right of the
sign digit, binding for all that follows.
5.10. We now pass to the case where the multiplier x and the multiplicand y may have either sign + or –, i.e.
any combination of these signs.
It would not do simply to extend the method of 5.8 to include the sign digits of x and y also. Indeed, we
assume −1 ≤ x < 1, −1 ≤ y < 1, and the multiplication procedure in question is definitely based on the ≥ 0
interpretations of x and y. Hence if x < 0, then it is really using x + 2, and if y < 0, then it is really using y +
2. Hence for x < 0, y ≥ 0 it forms

(x + 2)y = xy + 2y;

for x ≥ 0, y < 0 it forms

x(y + 2) = xy + 2x;

and for x < 0, y < 0 it forms

(x + 2)(y + 2) = xy + 2x + 2y + 4,
or since things may be taken modulo 2, xy + 2x + 2y. Hence correction terms –2y, –2x would be needed for
x < 0, y < 0, respectively (either or both).
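These correction identities can be checked numerically with the modulo-2 representation of 5.7; a small sketch in Python using exact fractions:

```python
from fractions import Fraction

def rep(x):
    """The representative x-bar with 0 <= x-bar < 2 (x modulo 2)."""
    return x % 2

def naive_product(x, y):
    """Multiply the two representatives as if both were >= 0, mod 2."""
    return (rep(x) * rep(y)) % 2

x, y = Fraction(-3, 8), Fraction(5, 8)         # x < 0, y >= 0
assert naive_product(x, y) == rep(x * y + 2 * y)          # off by 2y

x, y = Fraction(3, 8), Fraction(-5, 8)         # x >= 0, y < 0
assert naive_product(x, y) == rep(x * y + 2 * x)          # off by 2x

x, y = Fraction(-3, 8), Fraction(-5, 8)        # x < 0, y < 0
assert naive_product(x, y) == rep(x * y + 2 * x + 2 * y)  # off by 2x + 2y
```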
This would be a possible procedure, but there is one difficulty: As xy is formed, the 39 digits of the
multiplier x are gradually lost from AR, to be replaced by the right 39 digits of xy. (Cf. the discussion at the
end of 5.8.) Unless we are willing to build an additional 40 stage register to hold x, therefore, x will not be
available at the end of the multiplication. Hence we cannot use it in the correction 2x of xy, which becomes
necessary for y < 0.
Thus the case x < 0 can be handled along the above lines, but not the case y < 0.
It is nevertheless possible to develop an adequate procedure, and we now proceed to do this.
Throughout this procedure we will maintain the assumptions – 1 ≤ x < 1, – 1 ≤ y < 1. We proceed in several
successive steps.
First: Assume that the corrections necessitated by the possibility of y < 0 have been taken care of. We
permit therefore y ≷ 0. We will consider the corrections necessitated by the possibility of x < 0.
Let us disregard the sign digit of x, which is 1, i.e. replace it by 0. Then x goes over into x′ = x – 1 and as
– 1 ≤ x < 0, this x′ will actually behave like (x – 1) + 2 = x + 1. Hence our multiplication procedure will
produce x′y = (x + 1)y = xy + y, and therefore a correction – y is needed at the end. (Note that we did not use
the sign digit of x in the conventional way. Had we done so, then a correction – 2y would have been
necessary, as seen above.)
We see therefore: Consider x ≷ 0. Perform first all necessary steps for forming x′y (y ≷ 0), without yet
reaching the sign digit of x (i.e. treating x as if it were ≥ 0). When the time arrives at which the digit ξ_0 of x
has to become effective—i.e. immediately after ξ_1 became effective, after 39 shifts (cf. the discussion near
the end of 5.8)—at which time Ac contains, say, p (this corresponds to the p_39 of 5.8), then form

p̄ = p       if ξ_0 = 0,
p̄ = p − y   if ξ_0 = 1.

This p̄ is xy. (Note the difference between this last step, forming p̄, and the 39 preceding steps in 5.8,
forming p_1, p_2, …, p_39.)
Second: Having disposed of the possibility x < 0, we may now assume x ≥ 0. With this assumption we
have to treat all y ≷ 0. Since y ≥ 0 brings us back entirely to the familiar case of 5.8, we need to consider
the case y < 0 only.
Let y′ be the number that obtains by disregarding the sign digit of y, which is 1, i.e. by replacing it by 0.
Again y′ acts not like y − 1, but like (y − 1) + 2 = y + 1. Hence the multiplication procedure of 5.8 will
produce xy′ = x(y + 1) = xy + x, and therefore a correction −x is needed. (Note that, quite similarly to what we
observed above for x, we are not using the sign digit of y in the conventional way.)
In place of (1) we now form

2p_k = p_{k−1} + y′_k,   y′_k = 1 for ξ_(40−k) = 0,   y′_k = y′ for ξ_(40−k) = 1.   (2)

That is, we add 1 (y’s sign digit) or y′ (y without its sign digit), according to whether ξ_(40−k) = 0 or 1. Then p_k
should obtain from 2p_k again by halving.
Now the addition of (2) produces no carries beyond the 2^0 position, as we asserted earlier, for the same
reason as the addition of (1) in 5.8. We can argue in the same way as there: 0 ≤ p_h < 1 is true for h = 0, and
if it is true for h = k − 1, then (2) extends it to h = k also, since 0 ≤ y′_k ≤ 1. Hence the sum in (2) is ≥ 0 and
< 2, and no carries beyond the 2^0 position arise.
Fifth: In the three last observations we assumed y < 0. Let us now restore the full generality of y ≷ 0.
We can then describe the equations (1) of 5.8 (valid for y ≥ 0) and (2) above (valid for y < 0) by a single
formula,

2p_k = p_{k−1} + y″_k,   y″_k = y’s sign digit for ξ_(40−k) = 0,   y″_k = y without its sign digit for ξ_(40−k) = 1.   (3)
r_k = 2r_{k−1} ⊕ y,   where ⊕ is − if the signs of r_{k−1} and y agree, and + if they do not agree.   (4)

Let us now see what carries may originate in this procedure. We can argue as follows: |r_h| < |y| is true
for h = 0 (|r_0| = |x| < |y|), and if it is true for h = k − 1, then (4) extends it to h = k also, since 2r_{k−1} and ⊕y
have opposite signs. The last point may be elaborated a little further: because of the opposite signs
|r_k| = ||2r_{k−1}| − |y||, and since 0 ≤ |2r_{k−1}| < 2|y|, this is < |y|.
Hence we have always |r_k| < |y|, and therefore a fortiori |r_k| < 1, i.e. −1 < r_k < 1.
The rule (4) can be written as

r_k = 2r_{k−1} + (1 − 2ξ′_k) y,

where ξ′_k = 1 if the signs of r_{k−1} and y agree, and ξ′_k = 0 if they do not, i.e.

2^−k r_k = 2^−(k−1) r_{k−1} + (2^−k − 2^−(k−1) ξ′_k) y.

Summing over k = 1, …, n gives

2^−n r_n = x + {(1 − 2^−n) − Σ_{k=1}^{n} 2^−(k−1) ξ′_k} y,

i.e.

x = {−1 + Σ_{k=1}^{n} 2^−(k−1) ξ′_k + 2^−n} y + 2^−n r_n.

This makes it clear that z̄ = −1 + Σ_{k=1}^{n} 2^−(k−1) ξ′_k + 2^−n corresponds to the true quotient z = x/y, and 2^−n r_n,
with an absolute value < 2^−n |y| ≤ 2^−n, to the remainder. Hence, if we disregard the term −1 for a moment, ξ′_1,
ξ′_2, …, ξ′_n, 1 are the n + 1 first digits of what may be used as a true quotient, the sign digit being part of this
sequence.
Fifth: If we do not wish to get involved in more complicated round-off procedures which exceed the
immediate capacity of the only available adder Ac, then the above result suggests that we should put n + 1 =
40, n = 39. The ξ′1, …, ξ′39 are then 39 digits of the quotient, including the sign digit, but not including the
right-most digit.
The right-most digit is taken care of by placing a 1 into the right-most stage of Ac.
At this point an additional argument in favor of the procedure that we have adopted here becomes
apparent. The procedure coincides (without a need for any further corrections) with the second round-off
procedure that we discussed in 5.12.
There remains the term −1. Since this applies to the final result, and no right shifts are to follow, carries
which might go beyond the 2^0 position may be disregarded. Hence this amounts simply to changing the sign
digit of the quotient z̄: replacing 0 or 1 by 1 or 0, respectively.
This concludes our discussion of the division scheme. We wish, however, to re-emphasize two very
distinctive features which it possesses:
First: This division scheme applies equally for any combinations of signs of divisor and dividend. This
is a characteristic of the non-restoring division scheme, but it is not the case for any simple known
multiplication scheme. It will be remembered, in particular, that our multiplication procedure of 5.9 had to
contain special correcting steps for the cases where either or both factors are negative.
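The non-restoring recursion (4), with the concluding steps just described (the 1 placed in the right-most stage and the sign-digit change standing in for the term −1), can be sketched in Python with exact dyadic arithmetic (names ours):

```python
from fractions import Fraction

def nonrestoring_divide(x, y, n=39):
    """Per (4): the quotient digit xi'_k is 1 when r_{k-1} and y have
    like signs (then y is subtracted), else 0 (then y is added).
    Returns (z, r_n) with x = z*y + 2^-n * r_n and |z - x/y| < 2^-n."""
    r, y = Fraction(x), Fraction(y)
    assert abs(r) < abs(y)
    z = Fraction(-1)                     # the term -1 (the sign-digit change)
    for k in range(1, n + 1):
        xi = 1 if (r >= 0) == (y >= 0) else 0
        r = 2 * r + (1 - 2 * xi) * y     # r_k = 2 r_{k-1} -/+ y
        z += Fraction(xi, 2 ** (k - 1))  # digit xi'_k, positional value 2^-(k-1)
    z += Fraction(1, 2 ** n)             # the 1 in the right-most stage
    return z, r

z, r = nonrestoring_divide(Fraction(1, 2), Fraction(3, 4))
assert Fraction(1, 2) == z * Fraction(3, 4) + r / 2 ** 39
```

The same loop handles every sign combination of dividend and divisor, which is the first distinctive feature noted above.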
Some multiplications (cf. 5.8 and 5.9):

                        Binary notation     Decimal notation (fractional form)
Multiplicand ………          0.101               5/8
Multiplier   ………          0.011               3/8
                       _____________
                            0101
                           0101
                          0
                       _____________
Product      ………          0.001111            15/64
6. The Control
6.1. It has already been stated that the computer will contain an organ, called the control, which can
automatically execute the orders stored in the Selectrons. Actually, for a reason stated in 6.3, the orders for
this computer are less than half as long as a forty binary digit number, and hence the orders are stored in the
Selectron memory in pairs.
Let us consider the routine that the control performs in directing a computation. The control must know
the location in the Selectron memory of the pair of orders to be executed. It must direct the Selectrons to
transmit this pair of orders to the Selectron register and then to itself. It must then direct the execution of the
operation specified in the first of the two orders. Among these orders we can immediately describe two
major types: An order of the first type begins by causing the transfer of the number, which is stored at a
specified memory location, from the Selectrons to the Selectron register. Next, it causes the arithmetical
unit to perform some arithmetical operations on this number (usually in conjunction with another number
which is already in the arithmetical unit), and to retain the resulting number in the arithmetical unit. The
second type order causes the transfer of the number, which is held in the arithmetical unit, into the Selectron
register, and from there to a specified memory location in the Selectrons. (It may also be that this latter
operation will permit a direct transfer from the arithmetical unit into the Selectrons.) An additional type of
order consists of the transfer orders of 3.5. Further orders control the inputs and the outputs of the machine.
The process described at the beginning of this paragraph must then be repeated with the second order of the
order pair. This entire routine is repeated until the end of the problem.
6.2. It is clear from what has just been stated that the control must have a means of switching to a specified
location in the Selectron memory, for withdrawing both numbers for the computation and pairs of orders.
Since the Selectron memory (as tentatively planned) will hold 2^12 = 4,096 forty-digit words (a word is either
a number or a pair of orders), a twelve-digit binary number suffices to identify a memory location. Hence a
switching mechanism is required which will, on receiving a twelve-digit binary number, select the
corresponding memory location.
The type of circuit we propose to use for this purpose is known as a decoding or many-one function
table. It has been developed in various forms independently by J. Rajchman and P. Crawford.* It consists of
n flip-flops which register an n digit binary number. It also has a maximum of 2^n output wires. The flip-
flops activate a matrix in which the interconnections between input and output wires are made in such a way
that one and only one of the 2^n output wires is selected (i.e. has a positive voltage applied to it). These
interconnections may be established by means of resistors or by means of non-linear elements (such as
diodes or rectifiers); all these various methods are under investigation. The Selectron is so designed that
four such function table switches are required, each with a three digit entry and eight (2^3) outputs. Four sets
of eight wires each are brought out of the Selectron for switching purposes, and a particular location is
selected by making one wire positive with respect to the remainder. Since all forty Selectrons are switched
in parallel, these four sets of wires may be connected directly to the four function table outputs.
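A decoding function table is an n-digit-to-2^n selector; a minimal sketch in Python:

```python
def decode(flipflops):
    """n flip-flops register an n-digit binary number; exactly one of
    the 2^n output wires is selected (made positive)."""
    value = 0
    for bit in flipflops:                 # most significant flip-flop first
        value = (value << 1) | bit
    return [1 if i == value else 0 for i in range(2 ** len(flipflops))]

# One of the four 3-digit-entry, 8-output tables used to switch the Selectrons:
print(decode([1, 0, 1]))   # -> wire 5 selected: [0, 0, 0, 0, 0, 1, 0, 0]
```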
6.3. Since most computer operations involve at least one number located in the Selectron memory, it is
reasonable to adopt a code in which twelve binary digits of every order are assigned to the specification of a
Selectron location. In those orders which do not require a number to be taken out of or put into the Selectrons
these digit positions will not be used.
Though it has not been definitely decided how many operations will be built into the computer (i.e. how
many different orders the control must be able to understand), it will be seen presently that there will
probably be more than 2^5 but certainly less than 2^6. For this reason it is feasible to assign 6 binary digits for
* Rajchman’s table is described in an RCA Laboratories report by Rajchman, Snyder and Rudnick issued in 1943 under the terms of
an OSRD contract OEM-sr-S91. Crawford’s work is discussed in his thesis for the Master’s degree at Massachusetts Institute of
Technology.
the order code. It thus turns out that each order must contain eighteen binary digits, the first twelve
identifying a memory location and the remaining six specifying an operation. It can now be explained why
orders are stored in the memory in pairs. Since the same memory organ is to be used in this computer for
both orders and numbers, it is efficient to make the length of each about equivalent. But numbers of
eighteen binary digits would not be sufficiently accurate for problems which this machine will solve.
Rather, an accuracy of at least 10⁻¹⁰ or 2⁻³³ is required. Hence it is preferable to make the numbers long
enough to accommodate two orders.
As we pointed out in 2.3, and used in 4.2 et seq. and 5.7 et seq., our numbers will actually have 40
binary digits each. This allows 20 binary digits for each order, i.e. the 12 digits that specify a memory
location, and 8 more digits specifying the nature of the operation (instead of the minimum of 6 referred to
above). It is convenient, as will be seen in 6.8.2. and Chapter 9, Part II, to group these binary digits into
tetrads, groups of 4 binary digits. Hence a whole word consists of 10 tetrads, a half word or order of 5
tetrads, and of these 3 specify a memory location and the remaining 2 specify the nature of the operation.
Outside the machine each tetrad can be expressed by a base 16 digit. (The base 16 digits are best designated
by symbols of the 10 decimal digits 0 to 9, and 6 additional symbols, e.g. the letters a to f. Cf. Chapter 9,
Part II.) These 16 characters should appear in the typing for and the printing from the machine. (For further
details of these arrangements, cf. loc. cit. above.)
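The word layout just described can be summarized in a short modern sketch (Python for illustration only; which half of the word holds the first order of the pair, and the exact bit ordering within an order, are assumptions here):

```python
def pack_order(location, operation):
    """Form a 20-binary-digit order: 12 digits identifying a memory
    location followed by 8 digits specifying the operation."""
    assert 0 <= location < 2 ** 12 and 0 <= operation < 2 ** 8
    return (location << 8) | operation

def pack_word(first_order, second_order):
    """Store a pair of 20-digit orders as one 40-digit word."""
    return (first_order << 20) | second_order

def tetrads(word):
    """Express a 40-digit word as 10 tetrads, i.e. 10 base-16 digits,
    written with 0-9 and the letters a-f."""
    return format(word, "010x")
```

A whole word thus prints as 10 base-16 characters, each order as 5, of which 3 give the memory location and 2 the operation.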
The specification of the nature of the operation that is involved in an order occurs in binary form, so that
another many-one or decoding function is required to decode the order. This function table will have six
input flip-flops (the two remaining digits of the order are not needed). Since there will not be 64 different
orders, not all 64 outputs need be provided. However, it is perhaps worthwhile to connect the outputs
corresponding to unused order possibilities to a checking circuit which will give an indication whenever a
code word unintelligible to the control is received in the input flip-flops.
The function table just described energizes a different output wire for each different code operation. As
will be shown later, many of the steps involved in executing different orders overlap. (For example,
addition, multiplication, division, and going from the Selectrons to the register all include transferring a
number from the Selectrons to the Selectron register.) For this reason it is perhaps desirable to have an
additional set of control wires, each of which is activated by any particular combination of different code
digits. These may be obtained by taking the output wires of the many-one function table and using them to
operate tubes which will in turn operate a one-many (or coding) function table. Such a function table
consists of a matrix as before, but in this case only one of the input wires is activated at any one time, while
various sets of one or more of the output wires are activated. This particular table may be referred to as the
recoding function table.
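The pair of tables — the many-one decoding table with its checking circuit, and the one-many recoding table — can be sketched as follows (Python for illustration; the order codes and control-wire names below are invented for the sketch, not the machine's actual code):

```python
# Illustrative only: assumed codes and wire names.
DECODING_TABLE = {0b000001: "add", 0b000010: "multiply", 0b000011: "load"}

# One-many (recoding) table: each order activates a set of control wires,
# some of which are shared among several orders.
RECODING_TABLE = {
    "add":      {"selectrons_to_SR", "SR_to_adder"},
    "multiply": {"selectrons_to_SR", "SR_to_multiplier"},
    "load":     {"selectrons_to_SR", "SR_to_register"},
}

def control_wires(order_digits):
    """Decode the 6 operation digits; an unused code possibility goes
    to the checking circuit (here, an exception)."""
    operation = DECODING_TABLE.get(order_digits & 0b111111)
    if operation is None:
        raise ValueError("checking circuit: unintelligible code word")
    return RECODING_TABLE[operation]
```

Note how every order that involves a number in the memory activates the shared wire transferring a number from the Selectrons to the Selectron register, which is the economy the recoding table provides.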
The twelve flip-flops operating the four function tables used in selecting a Selectron position, and the
six flip-flops operating the function table used for decoding the order, are referred to as the Function Table
Register, FR.
6.4. Let us consider next the process of transferring a pair of orders from the Selectrons to the control.
These orders first go into SR. The order which is to be used next may be transferred directly into FR. The
second order of the pair must be removed from SR (since SR may be used when the first order is executed),
but cannot as yet be placed in FR. Hence a temporary storage is provided for it. The storage means is called
the Control Register, CR, and consists of 20 (or possibly 18) flip-flops, capable of receiving a number from
SR and transmitting a number to FR.
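The routing of an order pair through SR, FR, and CR can be sketched behaviorally (Python for illustration; the assumption that the first order occupies the more significant half of the word is ours):

```python
def split_order_pair(SR):
    """SR holds a 40-digit word containing a pair of 20-digit orders.
    The order to be used next goes directly into FR; the second order
    goes into the Control Register CR for temporary storage, since SR
    may be needed while the first order is executed."""
    FR = (SR >> 20) & (2 ** 20 - 1)   # first order of the pair
    CR = SR & (2 ** 20 - 1)           # second order, held in CR
    return FR, CR
```

When the first order has been executed, the contents of CR are transmitted to FR in turn.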
As already stated (6.1), the control must know the location of the pair of orders it is to get from the
Selectron memory. Normally this location will be the one following the location of the two orders just
executed. That is, until it receives an order to do otherwise, the control will take its orders from the
Selectrons in sequence. Hence the order location may be remembered in a twelve stage binary counter (one
capable of counting 2¹²) to which one unit is added whenever a pair of orders is executed. This counter is
called the Control Counter, CC.
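The behavior of CC — a twelve-stage counter advanced by one unit per pair of orders, or fed a new location directly by a transfer order — can be sketched as (Python for illustration; a behavioral sketch, not circuit detail):

```python
class ControlCounter:
    """A twelve-stage binary counter holding the location of the next
    pair of orders in the Selectron memory."""
    def __init__(self):
        self.value = 0

    def advance(self):
        """Add one unit after a pair of orders is executed;
        twelve stages wrap around at 2**12."""
        self.value = (self.value + 1) % 2 ** 12

    def transfer(self, location):
        """A transfer order feeds CC a new location directly."""
        self.value = location % 2 ** 12
```

Until a transfer order intervenes, the control thus takes its orders from the Selectrons in sequence.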
The details of the process of obtaining a pair of orders from the Selectron are thus as follows: The
contents of CC are copied into FR, the proper Selectron location is selected, and the contents of the
Selectrons are transferred to SR. FR is then cleared, and the contents of SR are transferred to it and CR. CC
is advanced by one unit so the control will be prepared to select the next pair of orders from the memory.
(There is, however, an exception from this last rule for the so-called transfer orders, cf. 3.5. These may feed
CC in a different manner, cf. the next paragraph below.) First the order in FR is executed and then the order