

COMPUTER MEMORIES, A HISTORY
Rev. 2.0, 3/2/1993


by
Thomas Mark Cuff, Temple University

INTRODUCTION

“In the ideal sense nothing is uninteresting; there are only uninterested people.”
- Brooks Atkinson

Memory size, 1 CPU (Central Processing Unit) and memory speed, and I/O
(Input/Output) are but three measures of the prowess of a computer. Memory size refers to the
size of the RAM (Random Access Memory) and the mass storage - usually magnetic hard disks.
Today one can be rather blasé when speaking about the number of Megs (one million bytes) of
RAM and the tens of Megs of hard disk space in a personal computer or a workstation; this casual
attitude owes its existence to the fact that semiconductor RAMs and magnetic hard disks are
based on very mature technologies. Mature technologies can, and in this case, do provide high
quality, low-cost products. But all this technological largess should not obscure the fact that
magnetic disk storage is an electromechanical system only slightly removed from Charles
Babbage’s Difference and Analytical Engines. The disadvantages are obvious and arise
exclusively from the mechanical nature of this storage system: slow access time due to the time it
takes for the disk to rotate and/or for the R/W (Read/Write) arm to move between tracks, or
cylinders in the case of multiplatter hard disks; serial, i.e. nonrandom, access to the data; and
short lifetime before the inevitable and nonrecoverable head crash takes place. 2 The fact that
most computer users are not bothered at all by these shortcomings speaks well of how mature
this technology really is. Access time has been reduced by e.g. using multiplatter disks with a
separate R/W head for each platter; 3 the nonrandom nature of the access has been mitigated by
RAM buffers associated with the disk drive controller, which allow blocks of data to be read at a
time; and while head crashes will probably never be a thing of the past, the MTBF (Mean Time
Between Failures) has been extended, and if you periodically back up your hard disk onto a tape
any data lost during a crash will be negligible.

1 It is to be understood that we are speaking of main memory [RAM], not cache or some other small, fast
memory used to buffer the difference in speed between the CPU and main memory.
2 Optical disk drives offer relief from head crashes and somewhat better access times due to the higher density
with which information can be stored optically, but it is still an electromechanical system with the inherent slowness
that is the hallmark of such systems, and it is still nonrandom storage.
3 The ILLIAC IV supercomputer (delivered 1972, retired 1982) took the idea of the multiple R/W heads for hard
disk to its logical extreme: here, each track had its own fixed R/W head. This unique hardware modification allowed
the ILLIAC IV to outperform the newer Cray-1 and CYBER 205 (CDC) on problems with very large databases where
all the data could not fit into the main memory (RAM) at once. The access time overhead for hard disk reads/writes
to the Cray-1 and CYBER 205’s conventional disk drives made them effectively slower than the aging ILLIAC IV.
See,
R. D. Levine; Supercomputers; Scientific American; Vol. 246; No. 1; Jan. 1982; pp. 118-135.


CPU and memory speed refers to both the access time of memories and the CPU cycle
time. Modern computers work in a bucket brigade fashion with the CPU pulling in data from the
memories, manipulating this data and sending it back to either the main memory (RAM) or to the
mass storage device (hard disk). As a result of these data transfers, it would seem that the
computer as a whole could be no faster than its slowest subsystem, usually the hard disk. That
computers are faster than their slowest subsystem is a fact which owes its existence to various
design tricks which buffer the data transfers and allow simultaneous processing. By far the
largest data buffer in a computer is its main memory (RAM). Since the main source of lethargy is
the hard disk, it behooves you to be able to load the complete program and data - both input and
output data - into main memory all at once, thus relieving the computer’s OS (Operating System)
of having to swap in or out sections of code or data between the main memory and the hard disk.
In the event the whole program cannot be accommodated by the main memory, overall system
speed can still be saved by writing the code in an efficacious manner, i.e. segmenting the code.
In segmenting, the code is broken up into a root segment, which always resides in main memory,
and a series of other code modules which are called by the root module and which in turn can call
still other modules. Once a given module has completed its task it is swapped out of memory to
make way for the next module that will be called.
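
To make the segmenting scheme concrete, the sketch below mimics a root segment that calls swappable modules under a fixed memory budget; the module names, the budget of three resident modules, and the eviction rule are all hypothetical, chosen only to illustrate the idea, not taken from any particular operating system.

```python
# A minimal overlay sketch: the root segment stays resident, other modules
# are swapped in and out of a fixed memory budget. All names are hypothetical.

MEMORY_BUDGET = 3  # at most three modules resident at once, root included

class OverlayManager:
    def __init__(self):
        self.resident = ["root"]  # the root segment always resides in memory

    def call(self, module):
        if module not in self.resident:
            if len(self.resident) >= MEMORY_BUDGET:
                evicted = self.resident.pop()  # swap out the last-loaded module
                print(f"swap out {evicted}")
            print(f"swap in  {module}")
            self.resident.append(module)
        print(f"run      {module}; resident = {self.resident}")

mgr = OverlayManager()
for m in ["input", "solve", "output"]:  # hypothetical call sequence from root
    mgr.call(m)
```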

One of the seemingly immutable natural laws of computer science is that CPU speed, as
measured by cycle time, is always faster than memory speed as measured by access time. This
law held true in 1942 when John V. Atanasoff, together with Clifford Berry, finished building their
special purpose computer for solving systems of linear equations, and still holds true today when
Seymour Cray is putting the finishing touches on his Cray-3 supercomputer. Atanasoff’s main
memory consisted of two rotating drums having brass pins protruding through the insulated
surface of the drums; each brass pin was connected to one lead of a paper capacitor embedded
in wax inside the drum, the other lead of the capacitor was grounded to the common shaft that
drove both drums; stationary brushes made contact with the rotating brass pins. The binary data,
represented as a voltage on the capacitors, was fed to the electronic logic unit, which employed
vacuum tubes; the output from the electronic logic unit was then returned to the drum for
temporary storage. It should be obvious from this description that the main memory was painfully
slow compared to the electronic logic. While the Cray-1 & Cray-2 both employ silicon chips in
both the CPUs and main memories, the Cray-3 will employ gallium arsenide chips in the CPUs for
speed enhancement while sticking with silicon chips for main memory due to their enormously
higher density of gates (storage locations), reliability, low cost, and availability. 4 Hence, again
the natural law of computers holds: CPU speed > memory speed. Comparing Atanasoff’s
computer with the Cray-3 is appropriate not just from the point of view of illustrating the natural
law of computers, but also because both are vector machines and, hence, good examples of what
is now commonly called MPP (Massively Parallel Processors). It probably comes as no
surprise to most people that the Crays are vector machines, i.e. use parallel processors to
increase the speed on those problems that are amenable to this approach, but it is less

4 J. Hecht; Physical Limits of Computing; Computers in Physics; Vol. 3; No. 4; Jul./Aug. 1989; pp. 34-40.


appreciated that Atanasoff’s computer was also a vector machine; in fact it was the first electronic
MPP. 5

I/O is the frosting on the cake, so to speak. A twelve inch thick sheaf of fan fold paper
containing the numerical output of a supercomputer to a fluid flow problem usually does no one
any good, by itself. But if this numerical data can be displayed on a color workstation with the
flow velocities color coded (blue = slowest velocity to red = fastest), then the data is comprehensible
to the user; some solutions may even necessitate animation of the graphic representation for true
user comprehensibility. A supercomputer without good I/O is a dead end.

Of the three measures of computer prowess that I have discussed: memory size,
CPU (Central Processing Unit) and memory speed, and I/O (Input/Output), no one stands above
the others in importance. But for the purpose of this paper I shall restrict myself to main memory
size and, since it is hard to talk about one without at least mentioning the other, memory speed.

HISTORY

“Fear created Gods, boldness created kings.”
- Prosper Jolyot

Most histories of computer hardware 6 begin with the origins of the idea of computing
machines; this standard approach is valid but does not leave the reader with a full appreciation of
the rôle of technological evolution and cross discipline fertilization that ineluctably leads to the
realization of a particular device or system. The approach I shall take here is to emphasize the
rather nonlinear linkages between noncomputer related inventions and the genesis and evolution
of the computer memory.

FLIP-FLOP STORAGE

By 1900, the technology involved in the construction of electroscopes had matured to
such a point that experimenters could justifiably expect that leakage currents would be negligible.
However, much to their chagrin these leakage currents could not be reduced to zero. No matter
what precautions were taken, such as: keeping the electroscopes in the dark, shielding them with

5 Alice R. Burks, Arthur W. Burks; The First Electronic Computer, The Atanasoff Story; The University of
Michigan Press; 1987. [This book presents an exhaustive look at the design and operation of Atanasoff’s computer
together with excerpts and analysis of the litigation which ultimately stripped Eckert and Mauchly of their patent for
the electronic computer. Arthur W. Burks, himself, worked on the ENIAC with Eckert and Mauchly.]
A. R. Mackintosh; Dr. Atanasoff’s Computer; Scientific American; Vol. 259; No. 2; Aug. 1988; pp. 90-96
R. Slater; Who Invented the Computer?; Computers in Physics; Vol. 1; No. 1; Nov./Dec. 1987; pp. 44-49. [This
article has two rather amusing errors, 1) it states that Atanasoff’s PhD thesis was entitled “The Dialectic[sic] Constant
of Helium.” and 2) it indicates that the card puncher had a failure rate of once every 100,000 times, when in fact the
rate was closer to one in 10,000, i.e. a very high failure rate, which limited the size of the linear system of equations that
could be solved using the Atanasoff/Berry computer.]
6 See for example, Herman H. Goldstine; The Computer from Pascal to von Neumann; Princeton University
Press; 1972.


large amounts of lead, doing all the experiments underground in caverns or railroad tunnels, etc.,
they always eventually lost their charge. The most obvious culprit was the natural radioactivity in
the environment, for instance inside the caverns and tunnels within which the experiments were
carried out. 7 It was discovered, much later on, that part of the residual ionization of the gas
inside the electroscopes was an artifact of the presence of naturally occurring radionuclides in the
metals used in the construction of the electroscopes. 8 Thus there were two sources of
measurement artifact: 1) external artifacts due to environmental radiation and 2) internal artifacts
due to radioactive contaminants in the material of the electroscope, itself. In an attempt to verify
the environmental radioactivity hypothesis, Father Thomas Wulf carried a rugged electroscope of
his own design 9 to a location as far from the surface of the Earth as he could find, the top of the
Eiffel Tower. 10 11 What he found when he analyzed his data was very puzzling, because the
rate of discharge of his electroscopes, although less than at sea level, was still much higher than
he had calculated assuming that the gamma rays issuing from the earth were attenuated by the
intervening blanket of air. Note, he explicitly assumed that the Eiffel Tower was not itself a source
of radioactivity, an assumption later proven to be false. Wulf’s results implied that there was an
extraterrestrial source of radiation. The extraterrestrial nature of this radiation was unequivocally
shown by Victor Hess around 1912, when he took sealed and unsealed Wulf-type electroscopes
along with himself in balloons to a maximum height of approximately 5 km (3.11 statute
miles) and showed that instead of decreasing, the rate of leakage increased by a factor of ~3X.

7 The amount and distribution of naturally occurring radioactivity is larger than most people assume,
E. Rutherford; Radium-the cause of the Earth’s Heat; Harper’s Monthly Magazine; Dec. 1904- May 1905; pp.
390-396.
R.J. Strutt; On the Distribution of Radium in the Earth’s Crust, and on the Earth’s Internal Heat; Proceedings of
the Royal Society; Vol. 77 (Series A); Jun. 1906; pp. 472-485.
8 This loss of charge due to the ionization produced by trace amounts of radionuclides in the metal used in the
construction of the electroscope has its modern counterpart in the phenomenon of single event upset (SEU). As
semiconductor memories were made smaller and denser, a point was reached in the mid 1970s when manufacturers
began to notice a disturbing rise in soft errors: a memory cell would mysteriously lose its charge and so would be
storing the wrong logical value, but the memory cell was not irreversibly damaged in any way, hence these were
called soft errors. The source of these errors was eventually found to be trace amounts of radioactive contaminants
in the glass, ceramic and heavy metals used in the construction of the package and the chip itself. The following
references should be consulted for a more detailed description of this problem,
T.C. May, M.H. Woods; Alpha-Particle-Induced Soft Errors in Dynamic Memory; IEEE Transactions on Electron
Devices; Vol. ED-26, 1979, pp. 2-9.
Charles S. Guenzer; Reliability Problems Caused by Nuclear Particles in Microelectronics; in Forrest L. Carter
(Ed.); Molecular Electronic Devices; Marcel Dekker, Inc.; 1982; chap. XXI, pp. 273-278.
9 T. Wulf; Ein neues Elektrometer für statische Ladungen III (A new electrometer for static charges III);
Physikalische Zeitschrift; Vol. 10; Apr. 15, 1909; pp. 251-253. [See also, Science Abstracts, Series A; Vol. 12A;
1909; Abstract No. 1104.]
10 Completed in 1889 for the Paris Exhibition, the Eiffel Tower was the tallest (~300 meters or ~984 feet) free
standing structure in the world, and held this record for 40 years until 1929 when the Chrysler Building in NYC
surpassed it. The builder of this tower, Alexandre Gustave Boenickhausen-Eiffel, is less well known as the architect
of the skeleton of the Statue of Liberty; the skeleton is the tower which holds up the outer skin of the Statue of Liberty.
For more information on this amazing man and his creations see,
Mario Salvadori; Why Buildings Stand Up; McGraw-Hill Paperbacks; 1982; chap. 8, pp. 126-143.
11 T. Wulf; Beobachtungen über die Strahlung hoher Durchdringungsfähigkeit auf dem Eiffelturm (Observations
of Radiation of High Penetrating Power on the Eiffel Tower); Physikalische Zeitschrift; Vol. 11; Sept. 15, 1910; pp. 811-813. [See
also, Science Abstracts, Series A; Vol. 13A; 1910; Abstract No. 1719.]


By 1914 Werner Kolhörster had extended Hess’s data to 9 km (5.59 statute miles) - also by
ascent in a balloon - and showed that the leakage rate was >10X that at sea level. 12

With their existence confirmed, cosmic rays became the focus of much research into their
makeup and properties. While electroscopes were instrumental in uncovering their existence,
cosmic rays could not be easily studied using electroscopes because these particular instruments
were integrating detectors and rather insensitive detectors at that. To the rescue came the
Geiger-Müller tube, which could produce a sensible response to the passage of just a single
cosmic ray particle. G-M tubes, which are nothing more than two concentric cylindrical electrodes
maintained at a high DC potential difference with a gas in between them, work by amplifying the
initial ionization produced in the gas of the tube. The amplification occurs as the charge carriers
of the initially created ion pair(s) are separated and accelerated towards their respective
oppositely charged electrodes; these rapidly moving particles collide with other gas molecules
causing more ionizations, etc.; the net result is that the interaction of a single cosmic ray particle
causes a cascading series of ionizations, known as an electrical avalanche. However, sensing
and displaying of the individual avalanches was done by way of an electroscope. Using two G-M
tubes, one mounted above the other, Walter Bothe and Werner Kolhörster in 1929 were able to
detect coincidences, i.e. simultaneous - or at least as simultaneous as one could detect by eye -
ionizations of the two G-M tubes caused presumably by the same cosmic ray particle traversing
both tubes. These coincidences persisted even after a 4 cm. thick block of gold was placed
between the two G-M tubes, which indicated that they possessed extraordinary penetrating
powers. Around 1930 Bruno Rossi replaced the electroscopes used by Bothe and Kolhörster in
their cosmic ray telescope, as it was called back then, with an electronic coincidence circuit
composed of vacuum tubes. This innovation, besides allowing Rossi to make some fundamental
discoveries such as the presence of showers by looking for coincidences with a cosmic ray
telescope composed of three G-M tubes each situated at the apex of an equilateral triangle,
ushered in the age of electronic counting of events. 13
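
The logic of coincidence detection is simple enough to sketch in a few lines: pulses from two counters are declared coincident when they arrive within a resolving time of one another. The event times and the one-microsecond resolving time below are made up for illustration; they are not Rossi's or Bothe's figures.

```python
# A minimal sketch of coincidence counting over timestamped G-M tube pulses.
# All times are in seconds and are invented for illustration.

RESOLVING_TIME = 1e-6  # assumed 1 microsecond coincidence window

def coincidences(tube_a, tube_b, window=RESOLVING_TIME):
    """Count pulses from tube A that pair with a tube B pulse inside the window.
    Both input lists must be sorted by arrival time."""
    count, j = 0, 0
    for t in tube_a:
        while j < len(tube_b) and tube_b[j] < t - window:
            j += 1  # skip B pulses too old to coincide with t
        if j < len(tube_b) and abs(tube_b[j] - t) <= window:
            count += 1
    return count

a = [1.0e-3, 2.0e-3, 5.0e-3]              # pulse arrival times, tube A
b = [1.0000005e-3, 3.0e-3, 5.0000008e-3]  # tube B: two pulses coincide with A
print(coincidences(a, b))                 # -> 2
```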

With signals from the G-M tubes being conditioned by vacuum tubes, the stage was set
for extending the rôle of vacuum tubes to actually counting the number of pulses or coincidences.
Electronic counters or scalers, which permitted one to count pulses (avalanches) from a single G-
M tube or coincidences from arrays of multiple G-M tubes, owed their existence to a circuit

12 A. M. Hillas; Cosmic Rays; Pergamon Press; 1972; chap. 1.


13 B. Rossi; Method of Registering Multiple Simultaneous Impulses of Several Geiger’s Counters; Nature
(London); Vol. 125; No. 31; Apr. 26, 1930; p. 636. [Note, Rossi was not the first person to employ vacuum tubes as a
coincidence detector, and says so at the very start of his Nature article. According to Rossi, W. Bothe first used a
vacuum tube with two grids (a tetrode) as a coincidence detector for two G-M tubes; the reference for this work is,
W. Bothe; Zur Vereinfachung von Koinzidenzzählungen (On the Simplification of Coincidence Counting);
Zeitschrift für Physik; Vol. 59; Dec. 1929 - Jan. 1930; p. 1.
Rossi’s original contribution was to use an arrangement of triodes as the coincidence circuit, this scheme had the
obvious advantage that coincidences in any number of G-M tubes could be detected.]


created by William Henry Eccles and F. W. Jordan, 14 called appropriately enough the Eccles-
Jordan circuit. 15 Eccles and Jordan were attempting to create what they called a continuous
action relay, closely akin to a latching electromechanical relay. With a slight modification, the
Eccles-Jordan trigger relay circuit can be made to be reversibly bistable (two-state) in which form
it becomes, and is called in modern jargon, a flip-flop (F/F). A chain of these F/Fs, one cascading
into the other, forms a very serviceable binary ring counter. The combination of the G-M tube, the
Rossi coincidence circuit and the ring counter made up of Eccles-Jordan circuits allowed
physicists to count cosmic rays and any other radiation they were interested in.
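
The counting action of such a chain is easy to mimic in software. The sketch below is only an analogy of cascaded bistable stages, with four stages and the pulse count chosen arbitrarily; each stage toggles on an incoming pulse and passes a carry pulse to the next stage on its 1 to 0 transition.

```python
# A software analogy of cascaded bistable (flip-flop) stages counting pulses.
# Four stages and eleven input pulses are arbitrary illustrative choices.

class FlipFlop:
    def __init__(self):
        self.state = 0  # bistable: holds 0 or 1

    def pulse(self):
        """Toggle the state; return True when the 1 -> 0 transition should
        send a carry pulse to the next stage in the chain."""
        self.state ^= 1
        return self.state == 0

def count_pulses(n_pulses, n_stages=4):
    chain = [FlipFlop() for _ in range(n_stages)]
    for _ in range(n_pulses):
        stage = 0
        while stage < n_stages and chain[stage].pulse():
            stage += 1  # the carry ripples down the chain
    # Stage 0 holds the least significant bit of the count.
    return sum(ff.state << i for i, ff in enumerate(chain))

print(count_pulses(11))  # -> 11, i.e. binary 1011 held in four flip-flops
```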

This brings us to around 1940 and Ursinus College where John Mauchly was teaching
physics, and thinking about building electronic computers. 16 In his quest for the basic electronic
building blocks of computers: memory and arithmetic logic units, he quite naturally tried adapting
the circuits which had been perfected for counters/scalers. Later on, after successfully
completing an electronics course at the Moore School of the University of Penna. and taking a
position at the Moore School, Mauchly continued to think about how to employ flip-flops
configured as ring counters for doing computation. As a result of a proposal penned by Mauchly
for an automatic machine to calculate firing tables, 17 the ENIAC was born. Actually, the ENIAC
did not come into existence all at once. Showing good common sense, Eckert and Mauchly first
constructed and tested one of the accumulators to prove out their design of the whole computer. An
accumulator, which was simply a ten stage binary ring counter, performed two functions: storage

14 William Henry Eccles is justifiably famous for many things not the least of which include: his painstaking
experimental investigations into the behavior of the coherer; his nomenclature for electronic tubes (diode, triode,
tetrode, pentode, etc.); an electronic oscillator composed of a triode and a tuning fork; the Eccles-Jordan family of
circuits; and his investigations into the propagation of radio over the horizon by the Kennelly-Heaviside layer in the
ionosphere. In contrast, there is next to nothing known about the life and accomplishments of F. W. Jordan. For a
short biography of Eccles see,
W.A. Atherton; Pioneers; Electronics & Wireless World; Vol. 96; Oct. 1990; pp. 908-910.
15 W.H. Eccles, F.W. Jordan; A Trigger Relay Utilising Three-Electrode Thermionic Vacuum Tubes; Electrician
(London); Vol. 83; Sept. 19, 1919; p. 298. See also their article of the same title in Radio Review; Vol. 1; Dec.
1919; pp. 143-146. [ Eccles and Jordan also designed a variant of the Eccles-Jordan trigger relay, which was
astable and so produced a continuous train of rectangular pulses and is known as a multivibrator. The reference for
the multivibrator is,
W.H. Eccles, F. W. Jordan; A Method of Using Two Triode Valves in Parallel for Generating Oscillations;
Electrician (London); Vol. 83; Sept. 19, 1919; p. 299.]
16 John W. Mauchly; The ENIAC; in N. Metropolis, J. Howlett, Gian-Carlo Rota (Eds.); A History of Computing in
the Twentieth Century; Academic Press; 1980; pp. 541-550.
17 Firing and/or bombing tables give the path (trajectory) of a cannon shell or an aerial bomb by solving a system
of two coupled differential equations,

y″ = −E·y′ − g
x″ = −E·x′
where
E = exp(−h·y) · G(v) / C   and   v = sqrt[ (x′)² + (y′)² ]

and g and h are fixed constants; C is a constant which depends of the type of shell or bomb and G(v) is the ballistic
drag function. G(v) is an empirical function obtained by actually firing large numbers of shells of a given type.
Because of its empirical nature the drag function was entered into the computer in the form of a lookup table,
specifically it was stored in the ENIAC’s ROM, which was a square resistor matrix invented independently by Jan
Rajchman and P. Crawford.
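
The system in this footnote is straightforward to integrate numerically. The sketch below uses a simple Euler step with illustrative constants and a made-up linear drag law standing in for the empirical G(v) (which, as noted, was really a lookup table); it shows only the shape of the computation a firing table entry required.

```python
# A minimal Euler-step integration of the firing-table equations above.
# g, h, C and the drag law G(v) are illustrative stand-ins, not real data.
import math

g, h, C = 9.8, 1.0e-4, 1.0

def G(v):
    return 1.0e-4 * v  # hypothetical linear drag, in place of the empirical table

def shell_range(v0, angle_deg, dt=0.01):
    x, y = 0.0, 0.0
    xp = v0 * math.cos(math.radians(angle_deg))  # x', horizontal velocity
    yp = v0 * math.sin(math.radians(angle_deg))  # y', vertical velocity
    while y >= 0.0:
        v = math.sqrt(xp * xp + yp * yp)
        E = math.exp(-h * y) * G(v) / C
        xp += -E * xp * dt            # x" = -E x'
        yp += (-E * yp - g) * dt      # y" = -E y' - g
        x += xp * dt
        y += yp * dt
    return x  # downrange distance at impact

print(f"range ~ {shell_range(300.0, 45.0):.0f} m")
```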


and addition/subtraction. The use of tubes as memory was both expensive and presented
reliability concerns. In addition, their cost militated against a large size memory; the ENIAC
could store only twenty ten digit decimal words. Reliability of systems composed of large numbers
of tubes was a total unknown at the time Eckert and Mauchly started constructing the subsections
that would eventually be connected together to form ENIAC. Eckert’s pragmatic approach to the
problem of reliability was threefold,

1) Pre-screening and burning-in the tubes. Pre-screening assured
that the electrical parameters of the tubes, such as
transconductance, were adequate. Burning-in the tubes eliminated
failures due to the infant mortality syndrome.
2) Derating the filament and plate/grid power dissipations, e.g. the
filament voltage was reduced from 6.3 VAC to 5.7 VAC. It was an
accepted and verified fact that the reliability of electronic
components decreases with increasing power dissipation.
3) Keeping the power to the filaments on continuously even when
the machine was idle. This rule arises from the observation that
most tube failures occur during the warm-up period, when the
filament power is first being applied. 18

The reliability of vacuum tubes reached its zenith with the Whirlwind computer (circa
1947), where through careful processing, screening and burn-in the mean time to failure was
raised from ~500 hours to about 500,000 hours. This level of reliability was not attained with
transistors in computers until about 10-15 years after they were first introduced to this same
application. 19 As far as the perception that vacuum tubes are less rugged than transistors goes,
consider the fact that they were the basis of the VT (Variable Time) fuzes used during WWII and
the Korean War. 20 Before the advent of the VT fuze, there were only two ways for a cannon shell
to be effective, either it had to score a direct hit or, in the case of antiaircraft rounds, its burst
altitude had to be preset to the actual height of the attacking airplanes so that a near miss would
be effective due to the resulting shrapnel distribution. All this changed with the VT fuze, which
allowed the shell to sense when it was close to its target. Now a near miss was almost always as
good as a direct hit. The VT fuze contained an RF oscillator (transmitter: a single triode vacuum
tube), a tuned RF amplifier (receiver: two pentode or tetrode vacuum tubes and a thyratron) and
separate transmitting and receiving antennas; the modification of the antenna patterns caused by
proximity to an airplane or surface of the earth/water was sensed by the tuned RF amplifier, which
in turn triggered a thyratron which set off the shell’s explosive charge. The tubes installed in
these VT fuzes had to be able to withstand at least 20,000 g, and at the same time exhibit

18 The designers of the IAS (Institute for Advanced Studies) computer believed that mechanical shock due to
sudden thermal expansion was the culprit behind tube failures that coincided with power on, and so arranged that
the filament power would be applied gradually in steps up to the, presumably derated, maximum. See,
J.H. Pomerene; Historical Perspectives on Computers - Components; in AFIPS Conference Proceedings; Vol.
41; Pt. II; Dec. 5-7, 1972; pp. 977-983.
19 Jay W. Forrester; Reliability of Components; Annals of the History of Computing; Vol. 5; No. 4; Oct. 1983; pp.
399-401.
20 Ralph B. Baldwin; The Deadly Fuze, Secret Weapon of World War II; Presidio Press; 1980.


minimal microphonics! In addition, these tubes had to be no larger than a paper clip in order to fit
inside the fuze housing along with passive components and a battery.

Finally, I should like to point out a rather obvious but nevertheless overlooked similarity
between ENIAC, MANIAC, BINAC, UNIVAC, etc. of yesteryear and the Crays, CYBERs, IBMs
and ETAs of today, namely heat. The use of tubes in normal consumer or military electronics did
not usually require extravagant cooling schemes, but this all changed with the coming of the
ENIAC with its extremely high packing density of tubes. Now, removing the heat became of prime
importance. Likewise, the supercomputers of today, with their super dense packaging of
semiconductor electronics, have stretched the envelope of electronic component heat transfer to
the breaking point, again. In fact, the use of inexpensive transistors and ICs has actually
exacerbated the heat transfer problem, which is counterintuitive since one would assume the heat
load to have gone down after the introduction of semiconductors. Here are some quantitative
comparisons,

ENIAC Computer, manufactured by the Moore School of the
University of Penna. CPU: octal socket vacuum tubes (many of
them were dual triodes). Main memory: vacuum tubes. Filaments
dissipated 80 kW, the DC power supply produced 40 kW, and the
blower system used 20 kW, total dissipation = 140 kW; total volume
of 100 ft × 10 ft × 3 ft = 3,000 ft³; forced air cooled @ below room
temperature (air conditioned). Energy density (kW/ft³) = 0.047. 21

Cray-1, -1/S, -1/M Supercomputer, manufactured by Cray
Research. CPU: bipolar transistor constructed uniprocessor. Main
memory: Cray-1, bipolar ECL (Emitter Coupled Logic); Cray-1/S,
BJT (Bipolar Junction Transistor); Cray-1/M, MOS (Metal Oxide
Semiconductor). Total power dissipation ~100 kW; total volume
<400 ft³; cooling via conduction through copper circuit boards to a
liquid Freon cold bar. Energy density (kW/ft³) = 0.250; notice that
the Cray-1’s energy density is actually higher than that of the
ENIAC. 22

CYBER-205 Supercomputer, manufactured by CDC (Control Data
Corporation), Minneapolis, MN. CPU: bipolar ECL (Emitter
Coupled Logic). Main memory: bipolar ECL (Emitter Coupled
Logic). Cooling: liquid Freon cold bar in contact with each of 150
LSI (Large Scale Integration) chips dissipating ~5 W housed on a
21 A.W. Burks; Electronic Computing Circuits of the ENIAC; Proceedings of the IRE; Vol. 35; Aug. 1947; pp. 756-
767.
22 Jack R. Thompson; The CRAY-1, The CRAY X-MP, the CRAY-2 and Beyond: The Supercomputers of Cray
Research; in Sidney Fernbach (Ed.); Supercomputers, Class IV Systems, Hardware and Software; North-Holland;
1986; pp. 69-81.


single multilayer circuit board of <1 ft² area (~750 W of total dissipation per board). 23

IBM 3081 Supercomputer, manufactured by IBM (International
Business Machines). CPU: unknown. Main memory: unknown.
Cooling: the logic chips (dissipating about 1.5 W each) are housed
in superclusters containing up to 119 of these chips on a 90 mm
square substrate (33 layer ceramic circuit board) that is part of the TCM
(Thermal Conduction Module). Each chip in the TCM is brought
into good thermal contact with the top half of the TCM via spring
loaded metal plungers, pressing down on the tops of the chips, and
with helium gas, which fills the empty space inside the TCM; the top
plate of the TCM carries the heat away by way of chilled water. 24

ETA10-E Supercomputer, manufactured by ETA Systems, Inc., St.
Paul, MN. CPU: Si CMOS consisting of a 40 layer, 420 mm × 570
mm (16.535 inches × 22.440 inches) board populated with 240
VLSI gate arrays each dissipating approximately 3-4 watts (~1 kW
total dissipation of which 0.2 kW is parasitic heat leakage, probably
from the memory board which is plugged directly into the CPU
board); cooled to 77 K with LN2; clock cycle is 14 ns @ room
temperature compared to 7 ns @ 77 K. Memory: Si, probably
MOS; air cooled @ room temperature. 25

Cray-2 Supercomputer, manufactured by Cray Research. CPU:
bipolar transistor constructed multiprocessor (four independent
processors). Main memory: MOS (Metal Oxide Semiconductor).
Total power dissipation ~100 kW (assumed to dissipate about the
same amount as a Cray-1); total volume <60 ft³ (16 ft² footprint ×
3.75 ft high); cooling via liquid (fluorocarbon) immersion. Energy
density (kW/ft³) = 1.67; notice that the Cray-2’s energy density is
also higher than that of the ENIAC and Cray-1. 26
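
The power-density figures quoted in these entries can be checked with a few lines of arithmetic; the dissipations and volumes below are simply the approximate values given in the text, with the "less than" volumes taken at their upper bounds.

```python
# Recomputing the energy (power) densities quoted above from the stated
# dissipations (kW) and volumes (cubic feet).

machines = {
    "ENIAC":  (140.0, 3000.0),  # 100 ft x 10 ft x 3 ft
    "Cray-1": (100.0, 400.0),   # "<400 ft^3" taken at the bound
    "Cray-2": (100.0, 60.0),    # 16 ft^2 footprint x 3.75 ft high
}

for name, (kw, ft3) in machines.items():
    print(f"{name:7s} {kw / ft3:.3f} kW/ft^3")
# ENIAC   0.047 kW/ft^3
# Cray-1  0.250 kW/ft^3
# Cray-2  1.667 kW/ft^3
```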

23 Neil R. Lincoln; Technology and Design Tradeoffs in the Creation of a Modern Supercomputer; in Sidney
Fernbach (Ed.); Supercomputers, Class IV Systems, Hardware and Software; North-Holland; 1986; pp. 83-111.
24 J.T. Pinkston; Supercomputer Trends and Needs: A Government Perspective; in N. Metropolis, D.H. Sharp,
W.J. Worlton, K.R. Ames (Eds.); Frontiers of Supercomputing; University of California Press; 1986; chap. 3, pp.
124-140.
25 R.K. Kirschman; Low Temperature Electronics; IEEE Circuits and Devices; Vol. 6; No. 2; March 1990; pp. 12-
24.
26 R. H. Ewald; Perspectives From the Field: Cray Research; Computers in Physics; Vol. 3; No.1; Jan./Feb.
1989; pp. 33-38.


FIGURE 1A – Cray XMP-24 Mainframe (Source: Photo by Tom Cuff at the National
Cryptologic Museum, Fort Meade, Maryland.)


FIGURE 1B – Cray XMP-24 Mainframe, Detail of One of the Power Wedges (Source:
Photo by Tom Cuff at the National Cryptologic Museum, Fort Meade, Maryland.)

MERCURY DELAY LINE STORAGE

The development of radar necessitated the creation of new electronic circuits both to transmit
the RF pulses and to detect and display the resulting RF echoes. One of the circuits associated
with radar, which was to later play an important part in the commercial development of computers,
was the acoustic delay line. In its simplest form an acoustic delay line consists of a long, liquid-
filled tube with an electrically connected quartz crystal at both ends; one quartz crystal serves as
the ultrasonic transmitter while the other serves as the receiver. A 1-15 MHz sinusoidal carrier
modulated by a narrow pulse is transmitted down the tube, and the time it takes to travel down the
tube to the receiving transducer is the delay; the time delay can be anywhere from a few hundreds of
microseconds to as long as 3500 µs.
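
A delay line stores exactly the bits that are in flight within it, so its capacity is the delay time multiplied by the pulse rate. As a back-of-the-envelope check, and assuming for illustration one bit per cycle of a 1 MHz carrier (an assumption, not a figure from the text):

```python
# Capacity of one acoustic delay line: bits in flight = delay time x bit rate.

delay_s  = 3500e-6  # the 3500 microsecond delay quoted above
bit_rate = 1.0e6    # assumed: one bit per cycle of a 1 MHz carrier

print(int(delay_s * bit_rate))  # -> 3500 bits circulating in one line
```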


I have not found a detailed history of this device, but I have been able to dredge up the
following historical facts and anecdotes. According to Emslie et al., 27 the first commercially
produced delay line was invented by William Shockley and was in production at Bell Labs by
1942. This particular delay line employed a mixture of water and ethylene glycol (automotive
antifreeze) maintained at a constant temperature of 55 °C - temperature control was essential for a
fixed carrier frequency due to the sensitivity of the speed of sound to the density of the medium in
which it propagates. Shockley’s delay line was used for the production of calibrated trigger pulses.
Mercury delay lines were also adapted for pulse cancellation in radar at the MIT Radiation Labs
around 1943. It should be mentioned that solid acoustic delay lines were also investigated, but
were found to be more apt to introduce distortion and unwanted signals due to the excitation of
surface waves and transverse modes, and as a result they never caught on.

Emslie et al. also say that the acoustic delay line was independently discovered at the
Moore School of the University of Penna. during the summer of 1943, presumably - although
they do not say so - by Eckert and Mauchly. Eckert and Mauchly’s utilization of this device was
as a serial, i.e. non-random, dynamic memory. The binary data was represented by the presence
or absence of a pulse as the output of the delay line was periodically sampled; the output signals,
after the appropriate amount of cleaning up, needed due to attenuation and/or dispersion suffered
in traveling through the delay line, were then fed back to the input.
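
A minimal software sketch of this recirculation scheme follows: each pulse time, the bit emerging from the line is reshaped (regenerated) and reinserted, and a write simply substitutes a new bit for the emerging one. The eight-bit line length is arbitrary.

```python
# A sketch of a recirculating delay-line memory. The line length is arbitrary.
from collections import deque

class DelayLineMemory:
    def __init__(self, n_bits):
        self.line = deque([0] * n_bits)  # the bits in flight through the line

    def tick(self, write_bit=None):
        """One pulse time: the oldest bit emerges, is regenerated (cleaned up
        after attenuation/dispersion), and is fed back to the input. Passing
        write_bit replaces the emerging bit, which is how a write happens."""
        out = self.line.popleft()
        regenerated = 1 if out else 0
        self.line.append(regenerated if write_bit is None else write_bit)
        return out

mem = DelayLineMemory(8)
for bit in [1, 0, 1, 1, 0, 0, 1, 0]:   # write an 8-bit word, one bit per tick
    mem.tick(write_bit=bit)
print([mem.tick() for _ in range(8)])  # one full circulation later: read back
# -> [1, 0, 1, 1, 0, 0, 1, 0]
```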

It should be noted that it was by no means obvious a priori that delay lines could indeed
function successfully as computer memories. Herman H. Goldstine recounts 28 how, sometime
in 1946-47 [Jan. 1947, according to Andrew Hodges in his book on Turing’s life], Alan M. Turing,
who was visiting America to take part in a symposium held at Harvard to celebrate the
inauguration of Howard Aiken’s Mark II computer, proved mathematically to Goldstine and John
von Neumann, using a signal-to-noise argument, that such an application of the acoustic delay
line could not possibly work. This story serves to highlight the dichotomy between theory and
practice or maybe between imagination and knowledge. 29 It should be noted for the sake of
completeness that Goldstine’s interpretation of these events, i.e. that Turing did not believe that

27 A.G. Emslie, H.B. Huntington, H. Shapiro, A.E. Benfield; Ultrasonic Delay Lines II; Journal of the Franklin
Institute; Vol. 245; No. 2; Feb. 1948; pp. 101-115. [See also, H.B. Huntington, A.G. Emslie, V.W. Hughes; Ultrasonic
Delay Lines I; Journal of the Franklin Institute; Vol. 245; No. 1; Jan. 1948; pp. 1-23.]
28 Herman H. Goldstine; ibid.; chap. 7.
29 A true story which further illustrates the difference between what we think we know and what we actually
know is the bumblebee can’t fly episode. [There were, at least, two such episodes, one having to do with nerve
conduction and the other with aerodynamics. We shall restrict ourselves to the former.] After A. L. Hodgkin and A. F.
Huxley’s landmark work on nerve conduction, someone decided to look at the problem of how a bumblebee’s brain
(ganglia) was able to direct the movement of its wings. Knowing the speed at which the wings flap, the distance
between the brain and the muscles moving the wings, and the diameter of the nerves (which dictates the conduction
speed) linking the two, it could be shown that, assuming one nerve impulse per flap, the conduction nerve speed is
insufficient, i.e. the bumblebee can’t fly. Now, of course, the bumblebee kept flying so there was obviously a problem
somewhere, but not with the insect. It turned out that the wing muscles are more or less autonomous of the brain,
the rate at which they contract is controlled by these same muscles sensing directly among other things the load on
the wings. Autonomous muscles like these are known as myogenic; the human heart is an example of a myogenic
muscle containing its own clock, the SA node. A clear instance where imagination is not more important than
knowledge. If we wanted to relate this to the computer field, we could say that the wing muscles represent an
example of parallel processing.


acoustic delay lines could work as memory elements, is contradicted directly by Andrew Hodges.
30

In fact, the construction of efficacious delay lines, whether for use as memories or for any
other purpose, required an enormous number of tricks. According to Emslie et al., for example, the
inside of the steel tube must be coated with lacquer so that the mercury will wet the surface, thus
preventing the trapping of air bubbles during the filling process, which could lead to unwanted
reflections. Temperature control of the mercury delay line was not, strictly speaking, necessary to assure
that the time spacing between pulses was constant; the master oscillator could be controlled in such a
manner that its frequency was continuously adjusted to maintain a constant delay, but this
approach was usually not employed since it caused problems with the other circuits of the
computer. The method used to control the temperature in the UNIVAC delay lines was a two tier
system composed of a coarse and fine control. The UNIVAC delay lines were arranged in
cylindrical fashion, 18 lines to the tank, with the recirculation electronics situated around the
periphery of the tank. Coarse control was obtained by way of an expansion bellows, which
sensed the change in volume of the mercury in the tank, and communicated this information by
means of an on-off switch to the heaters. The fine temperature control was achieved by dedicating
one of the 18 delay lines to storing all “1s”, i.e. filling the line with equally spaced pulses - “0”s are
represented as the absence of a pulse. If the delay time is an integral multiple of the pulse
modulation period, the pulses at the output of this line can be compared to reference pulses, i.e.
input pulses to the line, with a phase comparator; any deviation in the delay time produces a
signal from the phase comparator which is used to fine tune the heaters. 31
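
In other words, the fine control is a feedback loop: the phase error between the reference line's output pulses and its input pulses trims the heater power. The sketch below uses an assumed linear delay-versus-temperature model and made-up gains, since the text gives no UNIVAC constants; it shows the structure of the loop, nothing more.

```python
# A toy feedback loop in the spirit of the UNIVAC fine temperature control.
# The delay model, gains, and starting conditions are all invented.

TARGET_DELAY = 400e-6  # nominal line delay in seconds (illustrative)
GAIN = 5.0e4           # heater trim per second of delay error (assumed)

def delay_of(temp_c):
    # Assumed linear model: the delay drifts slightly with temperature.
    return TARGET_DELAY * (1.0 + 1.0e-4 * (temp_c - 40.0))

temp, heater = 42.0, 0.5  # start the tank slightly warm
for step in range(5):
    error = delay_of(temp) - TARGET_DELAY  # the phase comparator's output
    heater -= GAIN * error                 # fine-tune the heaters
    temp += 2.0 * (heater - 0.5)           # crude thermal response per step
    print(f"step {step}: temp = {temp:.3f} C, delay error = {error:+.2e} s")
```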

The idea of storing information in a delay line as acoustic pulses was co-opted for storing
information as electromagnetic pulses in long coaxial cable; the logic here being that the access
time would be faster and there would be no need to convert electrical signals to acoustic ones and
then convert them back again at the other end. This electromagnetic delay line memory became
practical with the advent of the tunnel (Esaki) diode. Besides its use in the electromagnetic delay
line, the tunnel diode sparked a renaissance in the design of digital logic elements. 32 It has even
been claimed by the professor teaching EE 501 that the use of tunnel diodes was pivotal to the
improved speed of the RCA Spectra 70 line of computers. 33

30 Andrew Hodges; Alan Turing, The Enigma; Simon and Schuster; 1983; p. 355.
31 J.P. Eckert; A Survey of Digital Computer Memory Systems; Proceedings of the IRE; Vol. 41; Oct. 1953; pp.
1393-1406.
32 Anon.; Tunnel Diode Manual; General Electric; 1961; section 5.4, chap. 6. [In case this reference is
unavailable, try the following citations, which were taken from this same manual,
W. Chow; Tunnel Diode Digital Circuitry; IRE Transactions on Digital Computers; Vol. EC-9; No. 3; Sept. 1960;
pp. 295-301.
E. Goto et al.; Esaki Diode High Speed Logical Circuits; IRE Transactions on Electronic Computers; Vol. EC-9;
Mar. 1960; pp. 25-29.
33 Of course, since I was unaware of both the RCA Spectra 70 computer and its purported use of tunnel diodes,
I decided to do some looking into the subject. According to Nisenoff and Totaro, the Spectra 70 series of computers
was intended to grab some of the mainframe market of the IBM 360 series of computers. The CPU utilized silicon
integrated circuits of either DTL or ECL [Lohman & Basara] and the main memory was monolithic ferrite cores
[Rajchman] with a (fast) scratch pad memory also composed of cores [Beard]. Although absence of proof is not
proof of absence, you cannot prove that this series of computers used tunnel diodes from the references listed below,


CATHODE RAY TUBE (ELECTROSTATIC) STORAGE


[NOTE TO THE READER, THIS SECTION IS NOT COMPLETE. I RAN OUT OF TIME.]

With the advent of radio, which allowed first the transmission of Morse code (CW) and
then voice communication (AM, FM, SSB, etc.), it was not very long before people thought about
transmitting pictures through the œther. The early history of television, which started back in the
late 1800s, is very complicated and not germane to the issue of computers. We will pick up the
action around the 1920s when a Russian immigrant named Vladimir Kosma Zworykin invented
the Iconoscope (television camera tube) and the Kinescope (television picture tube). 34 Of course,
Zworykin and RCA weren’t the only ones pursuing the idea of commercial television; in Britain,
James Dwyer McGee and W. F. Tedham at EMI (Electrical and Musical Instruments, Limited, now
Thorn-EMI) had also invented the Iconoscope, only they called it the Emitron. Luckily for
Zworykin, his patent clearly had priority, and in addition it turned out that RCA and EMI had a
reciprocal technical interchange agreement (including patents), which also served to further
invalidate McGee and Tedham’s patent. But contrary to most popular accounts, improvements in
the technology of television did not always flow from RCA to EMI. That the technical information
exchange was not one sided owes a tremendous debt to an English electronic engineer named
Alan Dower Blumlein, who was such a prodigious circuit designer that some people called him
the Edison of electronic circuits.

Alan D. Blumlein certainly deserves more recognition than he has received; after all, he
invented the moving coil, electronically damped record cutting head; devised all the basic
elements for stereo audio recording and playback, including the 45 degree angle cut of the
records; employed negative feedback before Harold S. Black [in truth, many other engineers had
also used negative feedback before Black]; invented the circuits which made television
commercially feasible, such as the linear sawtooth current generator; invented the Miller
integrator (a name picked by Blumlein himself to reflect the fact that this circuit made use of an
undesired interelectrode capacitance first discovered by Miller), which is the basis for time base
circuits in radar and oscilloscopes; designed the famous Blumlein switch, which was an outgrowth
of his wartime work on radar [during which time he worked with, among others, F. C. Williams
whom we shall meet later on in this section on electrostatic storage]; etc. 35

N. Nisenoff; Hardware for Information Processing Systems: Today and in the Future; Proceedings of the IEEE;
Vol. 54; No. 12; Dec. 1966; pp. 1820-1835.
J.B. Totaro; RCA Spectra 70, A Compatible Competitor; Data Processing Magazine; Jun. 1966; pp. 54-61.
R.D. Lohman, S.E. Basara; Integrated Circuits for Use in the RCA Spectra 70 Series Computer; Proceedings of
WESCON; 1965; sect. 12.2, pp. 1-18.
J.A. Rajchman; Memories in Present and Future Generations of Computers; Proceedings of WESCON; 1965;
sect. 12.4, pp. 1-7.
A.D. Beard; RCA Spectra 70, Basic Design and Philosophy of Operation; Proceedings of WESCON; 1965; sect.
12.1, pp. 1-8.
34 W.A. Atherton; Pioneers; Electronics & Wireless World; Vol. 93; Oct. 1987; pp. 1019-1020.
Vladimir K. Zworykin, G.A. Morton; Television; John Wiley & Sons; 1940. [This book describes every facet of
the television industry from how to deposit the phosphor on the inside face of the kinescope to how to light a TV
studio.]
35 W.A. Atherton; Pioneers; Electronics & Wireless World; Vol. 94; No. 1624; Feb. 1988; pp. 184-186. [It almost
seems as if there is an ongoing conspiracy to keep Blumlein’s story from reaching the general public. Consider that
the aforementioned article is missing from the table of contents of the issue of Electronics & Wireless World in which


Zworykin’s Iconoscope employed a mosaic of metal/metal oxide globules on one side of a
mica sheet; the completely metallized opposite side of the mica sheet functioned as the sense
electrode. With this arrangement suspended in a vacuum, and the image of a scene focused
onto the mosaic side of the mica sheet, photoelectrons are ejected from the cesium/cesium
oxide/silver/silver oxide globules in proportion to the light intensity… [To Be Done]

John Presper Eckert claims to have been the first person to have proposed using a
cathode ray tube (kinescope) as a computer memory system. Nevertheless, it was an
Englishman named Frederic Calland Williams who first put Eckert’s idea into practice by building
the Manchester University 1949 computer, which employed off-the-shelf commercial oscilloscope
tubes as its main memory. It must be noted that F. C. Williams, along with a number of other
English scientists and engineers, attended a series of lectures given at the Moore School, right
after the war (summer of 1946), on the design and construction of computers. According to
Eckert, it was after one of these lectures, which apparently Williams missed, that Williams
approached Eckert and the two of them got into a deep discussion about Eckert’s ideas on storing
binary information on the face of a cathode ray tube in the form of islands of charge. 36 It is no
small measure of the strength of Williams’ character that after such a short description of what
was then a totally untried technology, he should go back home and build a computer based solely
on this same technology. 37 A glimpse into the down-to-earth personality of Williams can be
obtained by reading his 1975 paper in The Radio and Electronic Engineer where he describes the
architectural style of the building which housed the Manchester computer as ‘late lavatorial’
because of the extensive use of brown glazed bricks on the walls, and ends this same paper with
the following paragraph,

“My own interests drifted to other things like gas turbines. My
investigations of these didn’t set the world on fire, but they did set
the Department on fire literally!”

In the same vein, Hodges mentions that when Williams came back to America in 1949, fresh from
his victory at Manchester, he managed to scandalize the employees at IBM, whose motto then as

it is located. Or how about the fact that the person who was supposed to write the biography of Blumlein has not only
not done what he said he was going to do, but to add insult to injury he has decided not to let anyone else - even
members of the Blumlein family who authorized him to do the biography - inspect the archival material he has
collected. See,
Anon.; The Missing Life of Alan Blumlein; Electronics & Wireless World; Vol. 96; Nov. 1990; pp. 973-976.
M.G. Scroggie; The Genius of A.D. Blumlein; Wireless World; Vol. 66; Sept. 1960; pp. 451-456.
B.J. Benzimra; A.D. Blumlein - An Electronic Genius; IEE Electronics & Power; Vol. XX; Jun. 1967; pp. 218-224.
P.B. Vanderlyn; In Search of Blumlein: the Inventor Incognito; Journal of the Audio Engineering Society; Vol. 26;
No. 9; Sep. 1978; pp. 660-670.
36 J. Presper Eckert; The ENIAC; in N. Metropolis, J. Howlett, Gian-Carlo Rota (Eds.); A History of Computing in
the Twentieth Century; Academic Press; 1980; pp. 525-539.
J.P. Eckert, H. Lukoff, G. Smoliar; A Dynamically Regenerated Electrostatic Memory System; Proceedings of the
IRE;Vol.38; May 1950; pp. 498-510.
37 F.C. Williams, T. Kilburn; A Storage System for use with Binary-Digital Computing Machines; Proceedings of
the IEE; Vol. 98; Part 2; No. 30; 1949; pp. 81-100.
F.C. Williams; Early Computers at Manchester University; The Radio and Electronic Engineer; Vol. 45; No. 7; Jul.
1975; pp. 327-331.


now is ‘THINK’; 38 when asked how he had managed such a daunting task as the
Manchester computer, he replied that he simply went ahead, “…without stopping to think too
much.” 39

Williams’ CRT storage system represented binary data in terms of charged dots and
dashes on the CRT screen, hence the term electrostatic storage. 40 The outside face of the CRT
was covered in a fine metal mesh, which served as the sense electrode, and at the same time
allowed one to actually see the data stored on the tube face by observing the fluorescence of the
phosphor - the dots and dashes being large enough to be easily discerned by eye. It should be
noted that the phosphor was not necessary; a bare glass screen would have worked just as well.
The point being that whatever the electron beam wrote on had to be isolated so that the charge
would not quickly dissipate or smear out. Zworykin’s Iconoscope had to have a screen made of a
highly electropositive metal (cesium) in order to sense, via the photoelectric effect, the incoming
light, but in order to prevent the resulting positive charge from leaking off, the metal was arranged
as globules (islands) on a high quality insulator (mica). Both the Iconoscope and the CRT
memory utilized a phenomenon called secondary electron emission to detect what was written
with light and electrons on their respective screens. Most materials, when bombarded with high
energy electrons, respond by emitting low energy electrons called secondary electrons; a measure of
how fecund a substance is as a secondary electron emitter is delta (δ),

δ = number of secondary electrons / number of incident electrons

For metals, δ is usually less than one or infrequently slightly more than one; insulators, on the
other hand, usually are copious emitters of secondary electrons having δ’s as high as ten…

Cathode ray tube (CRT) storage, first proposed in America, but first realized in Britain,
was the fastest memory system in existence, smallest in physical size (especially when compared
to the mercury delay line or even the flip-flop memories with their thousands of individual tubes),
and the cheapest (standard CRTs worked well). In light of all these advantages, its two
disadvantages: nonrandom access and the dynamic nature of its storage which necessitated
periodic refreshing, must have seemed almost insignificant by comparison, 41 but even these two
shortcomings were eventually conquered.

38 Thomas J. Watson, Jr.; Father, Son & Co.; Bantam; 1990.


39 Andrew Hodges; Alan Turing, The Enigma; Simon and Schuster; 1983; pp. 390-391.
40 Frederic C. Williams; 12.6.1 – Williams [sic]-Tube Memory System; in Harry D. Huskey, Granino A. Korn
(eds.); Computer Handbook, First Edition; McGraw-Hill Book Company; 1962; pp. 12-34 - 12-41 (text) & 12-126 – 12-
128 (references).
41 A more subtle problem with CRT storage was encountered during the testing and use of the ILLIAC I, and
that is the problem of the so called read-around ratio. The symptoms were a change in the logical value of a
particular bit if its nearest neighbor bits were accessed a large number of times. The interested reader is referred to,
James E. Robertson; The ORDVAC and ILLIAC; in N. Metropolis, J. Howlett, Gian-Carlo Rota (Eds.); A History
of Computing in the Twentieth Century; Academic Press; 1980; pp. 347-364.
CRT storage tubes are not the only memory system to have a read-around problem, semiconductor memories
are routinely tested for this type of failure mode. This type of problem in semiconductor memories is called a coupling
fault, see
J. Max Cortner; Digital Test Engineering; John Wiley & Sons; 1987; chap.10.


The need to refresh the data was eliminated by the inclusion of a second electron gun -
the so-called holding gun. 42 Dodd et al. were responsible for coming up with an electrostatic
storage tube for the Whirlwind computer; the chief engineer on this project was Jay Forrester.
Why they chose to design the tube from the ground up as opposed to simply copying F. C.
Williams’ approach is not clear. While their tubes did not need to be refreshed, their mean time to
failure was very short, ~1 month, and their price was very high, ~$1000; Williams’ tubes in
contrast were off-the-shelf items, hence cheap, and extremely reliable.

The nonrandom nature of the access to the data stored on these electrostatic tubes was a
source of irritation to an RCA engineer named Jan Rajchman. Rajchman was irritated not
because he had a computer using an electrostatic tube for memory, but simply because he
believed that this shortcoming could and should be eliminated to fully realize the potential of this
idea of electrostatic storage. To make a long story short, he designed and built an electrostatic
storage tube that was static - no refresh required - and which had random access; he called it the
Selectron. 43 These tubes were incorporated into a copy of the IAS (Institute for Advanced
Studies) computer built by the Rand Corp., and named the JOHNNIAC in honor of you-know-who.
According to Rajchman his Selectrons gave years of uninterrupted service, right up to the time
when they were finally replaced by a magnetic core memory. 44 Why didn’t the Selectron or
some more advanced version of it give magnetic core memory a run for its money? The answer
is probably that core memory had a greater potential for high density memory storage using
present and foreseeable technologies.

Although electrostatic storage for use as computer memory is a thing of the past, this
technology has neither died nor stagnated. Analog storage oscilloscopes are still made by
Tektronix, Inc. and extensively used in industry; these ‘scopes are direct, though probably not
linear, descendants of the tubes of Dodd et al. and Haeff. Their holding gun is sometimes now
called a flood gun. And while the electrostatic storage tube was displaced by the magnetic core
memory because of the higher information density of the latter, it was not displaced on account of
its access time, which was much faster than that of the core. This speed advantage over more
conventional forms of memory has led to electrostatic storage dominating the transient digitizer
(wide bandwidth single-shot storage ‘scopes) market for frequencies ~1 GHz.

MAGNETIC CORE STORAGE

What we usually think of when we hear the term core memory, a rectangular array of
minute ferrite doughnuts, did not start out that way. Around 1948, Howard H. Aiken, then the

42 S.H. Dodd, H. Klemperer, P. Youtz; Electrostatic Storage Tube; Electrical Engineering; Vol. 69; Nov. 1950; pp.
990-995. [Dodd et al. were not the first people to employ a holding gun for the purpose of static, as opposed to
dynamic storage. This idea had been used earlier to create a storage kinescope, i.e. a television tube which could
display a still picture for a long time without having to constantly rewrite it. See,
A. V. Haeff; A Memory Tube; Electronics; Sep. 1947; pp. 80-83.]
43 J. Rajchman; The Selective Electrostatic Storage Tube; RCA Review; Vol. 12; Mar. 1951; pp. 53-97.
44 Jan Rajchman; Early Research on Computers at RCA; in N. Metropolis, J. Howlett, Gian-Carlo Rota (Eds.); A
History of Computing in the Twentieth Century; Academic Press; 1980; pp. 465-469.

MAGNETIC CORE STORAGE

What we usually think of when we hear the term core memory - a rectangular array of minute ferrite doughnuts - did not start out that way. Around 1948, Howard H. Aiken, then the director of the Harvard Computation Laboratory, started work on building his first completely electronic computer, the Mark IV. Unlike today, when the designer of a computer can simply order an off-the-shelf memory from any of a large number of manufacturers, early computer designers had to design and build their own memories from scratch. 45 To this end, Aiken hired a recently graduated doctoral student named An Wang, who would later go on to found Wang Laboratories. One of Wang's first jobs was to come up with a memory for the Mark IV. The job description was not as open ended as it might seem, since Aiken had specified that the memory was to be magnetic but - and here was the catch - it was not to employ mechanical motion; the Mark III, an electromechanical computer, had used a rotating magnetic drum memory, and this time Aiken obviously wanted something faster, smaller, and more reliable. According to Wang, the idea of storing information on a toroid of magnetic material came to him almost immediately. 46 However, the process of reading out the direction of the magnetization was inherently destructive, and this at first disappointed Wang, since he had hoped to be able to read the toroid's state without affecting it.

Before continuing, it would be helpful to examine in detail how one goes about writing and reading a magnetic toroid. By using a magnetic material with a square hysteresis (B versus H) curve, one can realize a bistable physical system; the two states correspond to saturation magnetizations in the two opposite directions. Let's say that the clockwise (cw) or positive saturation magnetization of the core is associated with the state "0", and the counterclockwise (ccw) or negative saturation magnetization with the state "1". The only way to read the current state of the core is to expose it to one of the two saturation magnetic intensities, +Hs or -Hs; we'll pick the value +Hs. If the core was in the "0" state, which was attained through exposure to a magnetic intensity +Hs, reading it with +Hs will not cause its state (magnetization) to change, and since there will be no resulting flux change around the sense line, no output signal will be produced. On the other hand, if the core was in the "1" state, reading it with +Hs will cause its magnetization to flip completely around to the "0" state, and the concomitant change in flux will generate a large output signal on the sense line. You might ask, why not read the core with, say, a positive magnetic intensity less than the saturation value, so that the reading would be nondestructive? The answer is that the square nature of the hysteresis curve would prevent any significant flux change, regardless of the initial state of the core magnetization, when reading with a subsaturation value of magnetic intensity.
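
To make the mechanics concrete, here is a minimal sketch, in modern Python, of a single square-loop core with a destructive read; the class name, function names, and threshold value are my own illustrative inventions, not anything taken from Wang's papers.

    H_SAT = 1.0  # saturation magnetic intensity, in arbitrary units

    class Core:
        """Toy model of one square-loop ferrite core: state 0 = clockwise
        (positive) saturation, state 1 = counterclockwise (negative)."""
        def __init__(self, state=0):
            self.state = state

        def apply(self, h):
            """Apply a magnetic intensity h; return the sense-line signal.
            The square hysteresis loop means a subsaturation drive causes
            no flux change at all, hence no output signal."""
            if h >= H_SAT:                  # drive toward "0"
                pulse = (self.state == 1)   # flux reverses only from "1"
                self.state = 0
                return int(pulse)
            if h <= -H_SAT:                 # drive toward "1"
                pulse = (self.state == 0)
                self.state = 1
                return int(pulse)
            return 0                        # square loop: nothing happens

    core = Core(state=1)
    print(core.apply(H_SAT))   # 1 - sense pulse: the stored "1" is read out...
    print(core.apply(H_SAT))   # 0 - ...and destroyed; the core now holds "0"
    print(core.apply(0.5))     # 0 - a subsaturation read senses nothing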

45 A good way to appreciate how desperate people were in their pursuit of candidate technologies for memory
devices is to peruse the Quarterly Reports (1 thru 7) of the following reference,
J.R. Bowman, F.A. Schwertz, A. Milch, B. Moffat, R.T. Steinback, L. Nickels, B.O. Marshall; Computer
Components Fellowship no. 437; Mellon Institute of Industrial Research, Pittsburgh, PA; 1950-52; PB (Publication
Board) #109935-109940.
46 An Wang, Eugene Linden; Lessons, An Autobiography; Addison-Wesley Publishing Co.; 1986. [Note, for reasons which I cannot understand, Wang's contributions to the field of computer science have failed to make it into any of the standard books on the history of computers; Aiken, surprisingly, fared much better - his name is almost always mentioned. Although Wang's name seems to have been left out of the general reviews, his name has been enshrined, albeit erroneously, in word-of-mouth accounts of how he invented core memory. In truth he invented the memory core (toroid), not core memory [lexicographers should take note of the fact that a 'firetruck' is not the same thing as a 'truck fire' any more than a 'showboat' is a 'boat show']. The word-of-mouth account of Wang's contribution to core memory is an example of what Jan Harold Brunvand calls 'urban legends'; see, for example,
Jan H. Brunvand; The Vanishing Hitchhiker: American Urban Legends and Their Meanings; Norton; 1981.
Idem; The Choking Doberman & Other "New" Urban Legends; Norton; 1984.
Idem; The Mexican Pet: More "New" Urban Legends & Some Old Favorites; Norton; 1988.
Idem; Curses! Broiled Again; Norton; 1990.


Upon further thought, Wang realized that even though his memory would not be static, the toroids could be arranged in such a way that the act of destructively reading a toroid could be used to transfer its preread state to another toroid in bucket-brigade fashion. He thus fashioned the toroids into a discrete magnetic delay line, 47 in which the information was injected serially into one end (the input), marched down the line of toroids to the other end (the output), and there recovered, reshaped by suitable electronics, and recirculated back to the input end of the delay line. 48 As the old engineering saying goes, "Nothing, but nothing, is simple." Building a discrete magnetic delay line is easier said than done. To appreciate the problems of implementing this idea, consider the two main obstacles to be overcome: first, data must be made to move in only one direction, whereas its natural propensity is to move in both directions; second, even assuming we can get the data to move in the desired direction, there is the danger that it may move too fast in that direction, giving rise to a condition known as 'racing'. Wang and Woo surmounted the 'racing' problem by employing a master/slave arrangement of the cores, i.e. by using two cores per stage, and prevented backwards flow of data with semiconductor diodes strategically located in the various windings connecting the cores.
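
The master/slave scheme can be rendered schematically in a few lines of Python. This is a minimal sketch of the two-phase clocking idea under my own simplifying assumptions, not Wang and Woo's actual circuit:

    def step(masters, slaves):
        """One two-phase clock cycle of a master/slave core delay line.
        Two cores per stage prevent 'racing': because the phases never
        overlap, a bit can advance at most one stage per cycle."""
        n = len(masters)
        for i in range(n):                # phase A: master -> its own slave
            slaves[i] = masters[i]        # (a destructive read of the master)
            masters[i] = 0
        out = slaves[n - 1]               # bit emerging from the output end
        for i in range(n - 1, 0, -1):     # phase B: slave -> *next* master;
            masters[i] = slaves[i - 1]    # in hardware, the diodes in the
            slaves[i - 1] = 0             # windings enforce this one-way flow
        slaves[n - 1] = 0
        return out

    masters, slaves = [1, 0, 1, 1], [0, 0, 0, 0]   # a 4-stage line
    for _ in range(8):
        bit = step(masters, slaves)
        masters[0] = bit          # reshape and recirculate to the input end
        print(bit, end=" ")       # the stored pattern circulates intact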

Wang eventually took out a patent on the idea of storing binary data on magnetic toroids - or cores, as they are more commonly known - and on his idea of arranging them into a discrete magnetic delay line; according to his autobiography, he asked Woo and Aiken if they wanted to be part of his patent application, but they apparently declined. Wang's patent was eventually challenged by an inventor named Frederick W. Viehe, a public works inspector for the city of Los Angeles, who filed a patent interference against a number of claims in Wang's patent. After the dust had settled, what with interference hearings and appeals, Viehe was granted an interference on one of Wang's claims. According to Wang, IBM, which by this time had bought both Wang's and Viehe's patents, played the two inventors against each other before and during the hearings and appeals in what was an ultimately successful attempt to drive down the asking price for the two patents, by playing on each inventor's fear that his patent would be declared invalid in light of the other's.


47 Wang’s discrete magnetic delay line was the only one of its kind (i.e. discrete) that I know of, but there were
many versions of the continuous type of magnetic delay line. In one popular scheme, the continuous magnetic delay
line consists of a fine wire made of a special magnetic alloy and stretched between two acoustically damped posts,
the input end of the wire pass through a small solenoid and the output end also passes through a similar solenoid
and, in addition, the output end had a small bar magnet located above the solenoid and parallel to the stretched fine
wire. A current pulse through the input solenoid causes local magnetostriction to occur in the stretched fine wire, this
strain wave travels down the wire to the other end where it is detected via the inverse magnetostrictive (Villari) effect.
For further information about these magnetostrictive delay lines see,
W. Renwick, A. J. Cole; Digital Storage Systems; Chapman & Hall Ltd.; 1971; chap 2, pp. 13-32.
48 A. Wang, W. D. Woo; Static Magnetic Storage and Delay Line; Journal of Applied Physics; Vol. 21; Jan. 1950;
pp. 49-54. [Note, the use of the word “static” in this paper is as a synonym for “nonvolatile”. Wang and Woo were
emphasizing that unlike the cathode ray tube storage and acoustic delay line storage, their discrete magnetic delay
line memory did not lose its data if the power went off.
In what has to be one of the best examples of making a virtue out of a liability, Wang and Woo state in their
paper that, “The present upper limit of the speed of propagation [in the discrete magnetic delay line] is about 35,000
digits per second, and there is no lower limit [emphasis added].”]


A short time after Wang had perfected his discrete magnetic delay line and patented the ideas behind it, an engineer named Jay W. Forrester took Wang's idea of storing ones and zeros on toroids of magnetic material one step further and devised an arrangement of the toroids which allowed the resulting storage to be truly random, as opposed to the serial storage found in Wang's approach. 49 Forrester's idea was to locate the magnetic toroids at the intersections of a rectangular mesh of insulated wires, with a third insulated wire - the sense line - threaded through all the toroids. By putting the appropriate current through a particular x-direction wire and y-direction wire, only the toroid at the intersection of those two wires will be read. As was the case with Wang's discrete magnetic delay line, Forrester's configuration also employed a destructive read, i.e. he would read a toroid with a total current of -2I, sufficient to cause its magnetization to go to the "0" state regardless of its initial state; if the initial state of the toroid was "0", this state remained unchanged and no current was induced on the sense line, whereas if the initial state was "1", the magnetization would flip to "0", inducing a current on the sense line. Because of the destructive nature of the reading process, it is immediately followed by what is euphemistically called a refresh process, during which the original state of the toroid is reinstated - the refresh is inhibited in the case where the initial state was "0", since that state is not affected by the preceding read. 50
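
The essence of coincident-current selection can also be caricatured in a few lines of Python. This is a sketch under my own assumptions - the class, the normalized half-current value, and the method names are invented for illustration; Forrester's paper should be consulted for the real arrangement:

    class CorePlane:
        """Toy n x n coincident-current core plane; currents are expressed
        in units of the core's switching threshold."""
        HALF = 0.5  # current carried by each energized select wire

        def __init__(self, n):
            self.n = n
            self.bits = [[0] * n for _ in range(n)]

        def drive_to_zero(self, x, y):
            """Energize one x wire and one y wire; return the sense pulse.
            Half-selected cores (right row or right column, but not both)
            see only HALF, below threshold, so the square hysteresis loop
            leaves their magnetization untouched."""
            pulse = 0
            for i in range(self.n):
                for j in range(self.n):
                    current = (self.HALF if i == x else 0.0) \
                            + (self.HALF if j == y else 0.0)
                    if current >= 1.0 and self.bits[i][j] == 1:
                        self.bits[i][j] = 0   # a "1" flips to "0"...
                        pulse = 1             # ...inducing a sense pulse
            return pulse

        def read(self, x, y):
            """Destructive read, immediately followed by the 'refresh';
            the rewrite is inhibited when the core held "0"."""
            pulse = self.drive_to_zero(x, y)
            if pulse:
                self.bits[x][y] = 1
            return pulse

    plane = CorePlane(4)
    plane.bits[2][3] = 1         # store a "1" at location (2, 3)
    print(plane.read(2, 3))      # 1 - sense pulse, then automatic restore
    print(plane.read(2, 3))      # 1 - the refresh preserved the bit
    print(plane.read(0, 0))      # 0 - no pulse, and no restore needed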


49 J. W. Forrester; Digital Information Storage in Three Dimensions Using Magnetic Cores; Journal of Applied
Physics; Vol. 22; Jan. 1951; pp. 44-48. [The second paper referenced by Forrester is Wang and Woo’s 1950 article
in the Journal of Applied Physics, and legend has it that he also references their paper in his patent for core memory
citing them as prior art. It is for these reasons among others that it is almost impossible to fathom how and why
Wang and Woo’s names have been left out of most books and articles on the history of computers. For example, the
1962 book edited by Huskey and Korn has an extensive section on memory cores and core memory in Section
(Chapter) 12, but the Reference section for this chapter does not have a single reference to articles, patents, etc. by
either Wang or Forrester.
Raymond Stuart-Williams; 12.7 - Magnetic-Core Storage and Switching Techniques; in Harry D. Huskey,
Granino A. Korn (eds.); Computer Handbook, First Edition; McGraw-Hill Book Company; 1962; pp. 12-41 - 12-106
(text) & 12-126 – 12-128 (references).
The situation is only slightly better in the 1959 reference volumes edited by Grabbe, Ramo, and Wooldridge,
which cites the 1951 Journal of Applied Physics paper by Forrester, but does not list any citations for Wang either in
Chapter 15 – Magnetic Core Circuits or in Chapter 19 – Storage.
Isaac L. Auerbach; Chapter 15 - Magnetic Core Circuits & David R. Brown, Jack I. Raffel; Chapter 19 – Storage;
in Eugene M. Grabbe, Simon Ramo, Dean E. Wooldridge (eds.); Handbook of Automation, Computation, and Control,
Volume 2 – Computers and Data Processing; John Wiley & Sons, Inc.; 1959; pp. 15-01 – 15-25 (Chapter 15 –
Magnetic Core Circuits) & pp. 19-01 – 19-35 (Chapter 19 – Storage).
Note, the aforementioned reference volume is the second volume of a three volume set; the other volumes of the
set are as follows.
Eugene M. Grabbe, Simon Ramo, Dean E. Wooldridge (eds.); Handbook of Automation, Computation, and
Control, Volume 1 – Control Fundamentals; John Wiley & Sons, Inc.; 1958.
Eugene M. Grabbe, Simon Ramo, Dean E. Wooldridge (eds.); Handbook of Automation, Computation, and
Control, Volume 3 – Systems and Components; John Wiley & Sons, Inc.; 1961.]
50 W. N. Papian; New Ferrite-Core Memory; Electronics; Vol. 28; March 1955; pp. 194-197.


As initially formulated, magnetic cores were slow compared with the CPU cycle time, and this inequality was only aggravated when transistors and, later, ICs (Integrated Circuits) were introduced. The lethargy exhibited by the cores was puzzling at first, since theoretical considerations indicated that the movement of the domain walls - the magnetization-reversal mechanism - should be faster than was being observed. In addition, the magnetic intensities, H, required to induce domain wall growth were found experimentally to be smaller, by a factor of about 100, than what had been theoretically calculated. It turned out that the reason behind the slower-than-calculated movement of the domain walls and the smaller-than-theoretically-required magnetic intensity was the same as the reason the ultimate tensile strength of materials never comes close to theoretical predictions: defects. 51 A. A. Griffith and Benjamin Lockspeiser showed that if one takes a substance such as glass and forms extremely fine filaments, by the simple expedient of heating and pulling, the resulting filaments have an ultimate tensile strength approximating the expected theoretical maximum; the smaller the cross-sectional area, the closer to the calculated value. 52 Given that the number of defects per unit volume is a constant, the smaller the cross-sectional area of the filament, the smaller the probability of a defect being present. With this epiphany under their belts, engineers began reducing the size of the ferrite cores, and for their trouble got better, i.e. faster, memories. However, this downsizing brought two concomitant problems: manufacturing became difficult, and heating became an issue. Threading the new, smaller cores could no longer be performed by hand; it required very expensive automatic machinery. Alternative fabrication strategies included plated wires and thin films employing the photolithography techniques then being developed for the nascent IC industry. The now higher maximum access rates (cycle times of ~2.8 µs) coupled with the larger required magnetization currents resulted in heating above the Curie point of the magnetic material when the same memory location was accessed repeatedly, which made forced-air cooling a necessary adjunct. 53
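
The defect argument lends itself to a back-of-the-envelope calculation. The sketch below is my own illustration, with an arbitrary assumed defect density chosen only to exhibit the trend; it treats defects as randomly scattered at a fixed volume density, i.e. as a Poisson process:

    import math

    def p_defect_free(density_per_mm3, diameter_mm, length_mm):
        """Probability that a filament of the given dimensions contains
        no defect, for defects Poisson-distributed in its volume."""
        volume = math.pi * (diameter_mm / 2.0) ** 2 * length_mm
        return math.exp(-density_per_mm3 * volume)

    # Same assumed density and length, shrinking cross section:
    for d in (0.10, 0.03, 0.01):   # filament diameter, mm
        p = p_defect_free(1000.0, d, 10.0)
        print(f"diameter {d} mm: P(defect-free) = {p:.3f}")

With these (assumed) numbers, the defect-free probability climbs from essentially zero to roughly one in two as the diameter shrinks tenfold - the same trend Griffith and Lockspeiser observed in their filaments.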

Inevitably, as has been the case with most commercially viable ideas, Forrester's patent was challenged. Jan A. Rajchman, who claimed that he was the real 'Father' of core memory, 54 filed an interference stating that Forrester's patent interfered with his [Rajchman's] patent. 55 Rajchman never succeeded in being declared the 'Father' of core memory, but this did not stop him from going on to improve the design of core memory by inventing the Transfluxor. "What in God's name is a Transfluxor?", you say. Welllll, let me tell you. A Transfluxor is Jan A. Rajchman and A. W. Lo's version of the memory core, one which could be read nondestructively. 56

51 S.S. Brenner; Metal “Whiskers”; Scientific American; Vol. 203; No. 1; 1960; pp. 64-72.
52 James Edward Gordon; The Science of Structures and Materials; Scientific American Library; 1988; chap. 4.
53 J. H. Pomerene; Historical Perspectives on Computers - Components; in AFIPS Conference Proceedings; Vol. 41; Pt. II; Dec. 5-7, 1972; pp. 977-983.
54 J. Rajchman; Static Magnetic Matrix Memory and Switching Circuits; RCA Review; Vol. XIII; Jun. 1952; pp.
183-201.
Idem; A Myriabit Magnetic-Core Matrix Memory; Proceedings of the IRE; Vol. 41; Oct. 1953; pp. 1407-1421.
55 Richard E. Matick; Computer Storage Systems and Technology; John Wiley & Sons; 1977; chap. 1, p. 15.
[Matick cites the following document “Brief for Forrester on Final Hearing” Interference #88269, Jay W. Forrester vs.
Jan A. Rajchman. According to Matick, the interference was settled out of court with Forrester retaining claim to the
x-y selection scheme and Rajchman getting the credit for the inhibit concept.]
56 J.A. Rajchman, A.W. Lo; The Transfluxor -- a Magnetic Gate with Stored Variable Setting; RCA Review; Vol.
16; Jun. 1955; pp. 303-311.
Idem; The Transfluxor; Proceedings of the IRE; Vol. 44; Mar. 1956; pp. 321-332. [Note, even in the case of the Transfluxor, Rajchman cannot claim "…no prior art!", since, as he states in the second footnote of his March 1956 Proceedings of the IRE paper, other groups had also been looking at using multi-aperture cores for memory storage and other applications,
R.L. Snyder; Magnistor Circuits; Electronic Design; Vol. 3; Aug. 1955; pp. XXXX-XXX.
R. Thorensen, W.R. Arsenault; A New Nondestructive Read for Magnetic Cores; paper presented at the Western Joint Computer Conference; Mar. 1955.]


Although we will not have time to examine them in this document, core memory not only led to Transfluxor memories, it also inspired planar thin-film memories, plated-wire memories, Twistor memories, and magnetic bubble memories. 57

FIGURE 2A – An Wang, Inventor of the Memory Core, and Jay Forrester, Inventor of Core
Memory (Source: Photo by Tom Cuff at the Computer History Museum, Mountain View,
California).

57 B. Kazan, M. Knoll, W. Harth; Electronic Image Storage; Academic Press; 1968; pp. 190-202. [Note, even
though this book discusses core memories, or what it terms “coincident-current (matrix) memories”, which are made
up of memory cores, there is no reference to An Wang either in the text or in the references.]
Emerson W. Pugh; Technology assessment; Proceedings of the IEEE; Vol. 73; No. 12; December 1985; pp.
1756-1763.


FIGURE 2B – A Plane of Core Memory made of Memory Cores (Source: Photo by Tom
Cuff at the Computer History Museum, Mountain View, California).

Core memory became for a time the de facto standard memory for commercial computers, usurping delay lines and electrostatic storage systems. But eventually it, too, was superseded by semiconductor memories in most, but not all, applications. Today core memories and other magnetic-type memories, such as bubble memories and plated-wire memories, are still the method of choice for computer systems which have to operate in an ionizing-radiation environment. Re-entry vehicles such as the Mk12A (used on the Minuteman III ICBM (Inter-Continental Ballistic Missile)) and the Mk21 (used on the MX ICBM) probably employ core memories as a safe haven for the results of their computations; the MILSTAR military communications satellite might use magnetic bubble memory in the same role; and, of course, the computers on the space shuttles, such as the Challenger before it explosively disassembled, made extensive use of core memory as main memory.
