Bernd Ulmann
Analog and Hybrid Computer Programming
2nd edition
Mathematics Subject Classification 2010
Primary: 34-04, 35-04; Secondary: 92C45, 92D25, 34C28, 37D45
Author
Prof. Dr. Bernd Ulmann
Schwalbacher Str. 31
65307 Bad Schwalbach
[email protected]
ISBN 978-3-11-078759-7
e-ISBN (PDF) 978-3-11-078773-3
e-ISBN (EPUB) 978-3-11-078788-7
www.degruyter.com
For Rikka.
“An analog computer is a thing of beauty and a joy forever.”1
Acknowledgments and disclaimer

This book would not have been possible without the support and help of many
people. First of all, I would like to thank my wife Rikka Mitsam, who never
complained about the many hours I spent writing this book. In addition to that,
she did a lot of proofreading and post-processed all of the oscilloscope screenshots
and various pictures to make them print-ready.
I am also greatly indebted to Dr. Chris Giles, who not only gave much con-
structive criticism but also pointed out lots of additional interesting literature and
programming examples. He also extended Occam’s razor into Occam’s chainsaw
during the process of proofreading and enhancing this book. :-)
In addition, I wish to express sincere thanks to Dr. Duncan Cadd, Maikel
Hajiabadi, Felix Letkemann, Bernd Johann, Nicole Matje, Oliver Bach,
Jens Flemmer, Dr. Christian Kaminski, Dr. Robert Schorr and Ian S.
King, who have proofread this book and offered helpful advice. Discussions with
Jens Breitenbach also spotted numerous errors and inconsistencies which were
rectified accordingly.
I am also indebted to Mr. Mirko Holzer, who programmed the digital por-
tion of the hybrid computer setup described in section 7.5.
Last but not least, I would like to thank Tibor Florestan Pluto for his
permission to use some of his photographs in this book (figure 6.63 and the title
picture).
All of the worked examples in the book have been implemented on an Analog
Paradigm Model-1 analog computer for two reasons. First, the author is one of
the main developers of this system and second, the machine seems to be the only
analog computer currently available on a commercial basis. All of the examples
can be (and have been to a large degree) programmed on other machines.

Preface to the 2nd edition
This second edition of “Analog and Hybrid Computer Programming” has been
prepared in response to the vastly increased interest in analog computing during
recent years. The errors and typos found in the first edition have been corrected
and additional topics have been included in the book.
The author is especially indebted to Dr. Chris Giles for his invaluable support. He not only did a terrific job proofreading this 2nd edition but was also a great discussion partner when it came to the nitty-gritty details. Nicole Matje and Oliver Bach also did a great job proofreading. Stefan Wolfrum also spotted and corrected quite a few errors.
The author is also indebted to Nick Baberuxki and Maikel Hajiabadi for
the hybrid computing examples shown in sections 7.6 and 7.7 and their valuable
overall feedback.
Since the first edition was published, a new analog computer, THE ANALOG
THING (THAT), has been brought to the market as an open hardware project.
Consequently, many examples found in this book have been implemented on this
small analog computer, which is also described in detail in the introductory chap-
ters.
The following new topics and examples have been included in this second
edition:
– Minimum/maximum circuits
– Stieltjes integral
– Transfer functions
– Exponentially mapped past
– SEIR model
– Bessel functions
– The SQM model
– Euler spiral
– The Hindmarsh-Rose model of neuronal bursting and spiking
– The simulation of the flight of a glider
– Elastic pendulum
– Making music with analog computers
– Neutron kinetics
– Analog sorting
– Solving systems of linear equations with a hybrid computer approach
– Solving partial differential equations with random walks
– A simple hybrid controller for THE ANALOG THING
Contents
1 Introduction
1.1 What is an analog computer?
1.2 Direct vs. indirect analogies
1.3 A short history of analog computing
1.4 Characteristics of analog computers
2 Computing elements
2.1 Machine units
2.2 Summer
2.3 Integrators
2.4 Free elements
2.5 Potentiometers
2.6 Function generators
2.7 Multiplication
2.8 Comparators and switches
2.9 Input/output devices
3 Analog computer operation
4 Basic programming
4.1 Radioactive decay
4.1.1 Analytical solution
4.1.2 Using an analog computer
4.1.3 Scaling
4.2 Harmonic functions
4.3 Sweep
4.4 Mathematical pendulum
4.4.1 Straightforward implementation
4.4.2 Variants
4.5 Mass-spring-damper system
4.5.1 Analytical solution
4.5.2 Using an analog computer
4.5.3 RLC-circuit
5 Special functions
5.1 Stieltjes integral
5.2 Inverse functions
5.2.1 Square root
5.2.2 Division
5.3 f(t) = 1/t
5.4 Powers and polynomials
5.5 Low pass filter
5.6 Triangle/square wave generator
5.7 Ideal diode
5.8 Absolute value
5.9 Limiters
5.10 Dead zone
5.11 Hysteresis
5.12 Maximum and minimum
5.13 Bang-bang
5.14 Minimum/maximum holding circuits
5.15 Sample & Hold
5.16 Time derivative
5.17 Time delay
5.17.1 Historic approaches to delay
5.17.2 Digitization
5.17.3 Sample and hold circuits
5.17.4 Analog delay networks
5.18 Transfer functions
5.19 Exponentially mapped past
6 Examples
6.1 Displaying polynomials
6.2 Chemical kinetics
6.3 SEIR model
6.4 Damped pendulum with external force
6.5 Mathieu's equation
Abbreviations
AC Alternating Current
ADC Analog to Digital Converter
AI Artificial Intelligence
BBD Bucket Brigade Device
DAC Digital to Analog Converter
DC Direct Current
DEQ Differential Equation
DDA Digital Differential Analyzer
DVM Digital Voltmeter
EMP Exponentially Mapped Past
FET Field Effect Transistor
FPGA Field Programmable Gate Array
GND Ground
HPC High Performance Computing
IC Initial Condition
LF Line Feed
NG Noise Generator
ODE Ordinary Differential Equation
OP Operate
PDE Partial Differential Equation
RAM Random Access Memory
RC Resistor Capacitor
RLC Resistor Inductor Capacitor
SEIR Susceptible Exposed Infected Recovered
SJ Summing Junction
THAT The Analog Thing
1 Introduction
1.1 What is an analog computer?
A book about programming analog and hybrid computers may seem like an
anachronism in the 21st century – why should one be written and, even more
important, why should you read it? As much as analog computers seem to have
been forgotten, they not only have an interesting and illustrious past but also an
exciting and promising future in many application areas, such as high performance
computing (HPC for short), the field of dynamic systems simulation, education
and research, artificial intelligence (biological brains operate, in fact, much like
analog computers), and, last but not least, as coprocessors for traditional stored-
program digital computers, thereby forming hybrid computers.
From today’s perspective, analog computers are mainly thought of as being
museum pieces and their programming paradigm seems archaic at first glance. This
impression is as wrong as can be and is mostly caused by the classic patch field
or patch panel onto which programs are patched in the form of an intricate maze of wires, resembling real spaghetti "code"... On reflection, this form of programming
is much easier and more intuitive than the algorithmic approach used for stored-
program digital computers (which will be just called digital computers from now
on to simplify things). Future implementations of analog computers, especially
those intended as coprocessors, will probably feature electronic cross-bar switches
instead of a patch field. Programming such machines will resemble the program-
ming of a field programmable gate array (FPGA), i. e., a compiler will transform
a set of problem equations into a suitable setup of the crossbar-switches, thus
configuring the analog computer for the problem to be solved.
The notion of an analog computer has its roots in the Greek word ἀνάλογον ("analogon"), which lives on in terms like "analogy" and "analogue". This aptly
characterizes an analog computer and separates it from today’s digital computers.
The latter have a fixed internal structure and are controlled by a program stored
in some kind of random access memory.
In contrast, an analog computer has no program memory at all and is pro-
grammed by actually changing its structure until it forms an analogue, a model,
of a given problem. This approach is in stark contrast to what is taught in pro-
gramming classes today (apart from those dealing with FPGAs). Problems are
not solved in a step-wise (algorithmic) way but instead by connecting the various
computing elements of an analog computer in a suitable manner to form a cir-
cuit that serves as a model of the problem under investigation. Figures 1.1 and
1.2 illustrate these two fundamentally different approaches to computing. While
a classic digital computer works more or less in a sequential fashion, the comput-
ing elements of an analog computer work in perfect parallelism with none of the
synchronization issues that are often encountered in digital computing.
Fig. 1.1. Principle of operation of a stored-program digital computer (see [Truitt et al. 1960, p. 1-40])
Fig. 1.2. Structure of an analog computer (see [Truitt et al. 1960, p. 1-41])
The idea of analog computing is, of course, much older than today’s predominantly
algorithmic approach. In fact, the very first machine that might aptly be called
an analog computer is the Antikythera mechanism, a mechanical marvel that was
built around 100 B. C. It has been named after the Greek island Ἀντικύθηρα (Antikythera), where its remains were found in a Roman wreck by sponge divers in
1900. At first neglected, the highly corroded lump of gears aroused the interest of
Derek de Solla Price, who summarized his scientific findings as follows:2
“It is a bit frightening to know that just before the fall of their great civilization the
ancient Greeks had come so close to our age, not only in their thought, but also in their
scientific technology.”
Research into this mechanism, which defies all expectations with respect to an an-
cient computing device, is still ongoing. Its purpose was to calculate sun and moon
positions, to predict eclipses and possibly much more. The mechanism consists of
more than 30 gears of extraordinary precision yielding a mechanical analogue for
the study of celestial mechanics, something that was neither heard of nor even thought of for many centuries to come.3
Slide rules can also be regarded as simple analog computers as they allow the
execution of multiplication, division, rooting, etc., by shifting of (mostly) loga-
rithmic scales against each other. Nevertheless, these are rather specialized analog
computers, just as planimeters, which were (and to some extent still are) used to
measure the area of closed figures, a task that frequently occurs in surveying but
also in all branches of natural science and engineering.
Things became more interesting in the 19th and early 20th centuries with the
development and application of practical mechanical integrators. Based on these
developments, William Thomson, later Lord Kelvin, developed the concept
of a machine capable of solving differential equations. Although no usable com-
puter evolved from this, his ideas proved very fruitful. Specifically, his approach
to programming such machines is still used today and called the Kelvin feedback
technique.4
Figure 1.3 shows a mechanical analog computer, called a differential analyzer.
On both sides of the long table-like structure in the middle of the picture, various
computing elements such as integrators (discernible by the small horizontal disks),
differential gears, plotter tables, etc., can be seen. The elongated structure in the
middle is the actual interconnect of these computing devices, which consists of a
myriad of axles and gears. Programming such a machine was a cumbersome and
time consuming process as the interconnection structure had to be more or less
completely dismantled and rebuilt every time the machine was configured to solve
the differential equations which describe the new problem.
Figure 1.4 shows a simple setup of a differential analyzer to integrate a function
given in graphical form. A central motor, shown on the left, drives all computing
elements of the machine. On the upper left an input table is visible. It consists of
a magnifier with crosshairs, which is mounted in such a way that it will be moved
by the central motor horizontally while its vertical position is controlled manually
by a hand crank, which is turned so that the crosshairs always follow the line of
the input function.5
At the heart of this setup is an integrator shown at the bottom of the figure.
Basically, it consists of a rotating flat disk driven by the central motor and a
friction-wheel rolling on the surface of the disk. The radial position of this wheel
on the disk is now controlled by the vertical component of the crosshairs on the
input table. Given some angular velocity of the rotating disk, the angular velocity of the friction-wheel is determined by its radial position on the disk, so its accumulated rotation yields

$$\int_0^T f(\tau)\,\mathrm{d}\tau,$$

where τ represents the machine time (more about that later) – in this case the rotation of the horizontal disk – and f(τ) controls the radial position of the friction-wheel. The integrator is running during the time interval [0, T]. Figure 1.5 shows an actual implementation of the integrator which was used in the Oslo differential analyzer.

Fig. 1.3. Vannevar Bush's mechanical differential analyzer (source: [Meccano 1934, p. 443])
Fig. 1.4. A simple differential analyzer setup for integration (cf. [Karplus et al. 1958, p. 190], [Soroka 1962, p. 8-10])
Fig. 1.5. Integrator from the Oslo differential analyzer (see [Willers 1943, p. 237])

4 Lord Kelvin is often cited as having proposed the use of analog computers for fire control, but although mechanical differential analyzer techniques were successfully employed for purposes such as naval gun fire control in the early 1900s, it took Vannevar Bush to realize that these components could be configured into a general purpose computer.
5 A steady hand is required for this task, which was quickly automated in order to eliminate this rather unpredictable source of error during a computation.
The output from the friction-wheel is then used in this setup to control the
vertical position of the output table’s6 (upper right of the picture) pen position,
while its horizontal position is controlled by the central motor. The resulting figure
is the graph of the integral over the input function.
Mechanical differential analyzers like this one were only used for a short period
of time as their disadvantages could not easily be overcome. Their setup is cumber-
some and time-consuming, the many mechanical parts require a lot of maintenance
work, and their speed of computation is limited due to the non-negligible inertias
of the rotating and moving parts.
There were some attempts to build electro-mechanical differential analyzers
in which basic computing elements were still purely mechanical while their in-
7 A synchro (also known as a Selsyn) is basically a rotary transformer with its primary
winding on a rotor, which is surrounded by typically three secondary windings. When the
primary is fed with an AC signal, signals corresponding to the angular position of the rotor
are induced in the stator windings, which can then be used to determine the angle of the rotor.
Two synchros can be connected back to back, forming a Transmitter and Control Transformer
pair, which can form the heart of a servo system to perform torque multiplication, one of the
biggest challenges in building a mechanical differential analyser. Arnold Nordsieck used
these devices in his differential analyzer, see [Nordsieck 1953] and [Brock 2019].
ing the same techniques that are still used today. More information on the history
of analog computing can be found in [Ulmann 2023]8 and [Small 2001].
Fig. 1.6. Helmut Hoelzer’s general purpose analog computer after World War II
2 Computing elements
The following sections introduce the basic elements which comprise an electronic
analog computer. Furthermore, the notion of the machine unit will be introduced,
because the representation of values is of central importance for all of the following
concepts. The examples shown have been implemented on an Analog Paradigm
Model-1 analog computer.
2.1 Machine units

Voltages or currents are the natural way of representing values within a calcu-
lation on an analog computer. Since the majority of historic and modern analog
computers use voltages instead of currents, the following sections are restricted to
this technique.
Obviously, values represented by voltages are limited by some minimum/maxi-
mum voltages, known as machine units, m− and m+ , which are fixed for a given
analog computer. Historic vacuum tube based machines often used units of ±100 V
and sometimes ±50 V, while later and modern analog computers feature machine
units of ±10 V and sometimes even as low as ±5 V.
All voltages representing variables in a computer setup are bound by these
machine units, so it is normally necessary to scale a problem to be solved on an
analog computer to avoid an overload condition in which a variable exceeds the
machine unit voltage. If this happens, typically an overload indicator will be lit,
identifying the affected computer element. In addition to this, the computer run
can be halted automatically to determine the cause of the overload condition.
Overloads normally result from erroneous scaling or patching and do not harm
the computer but will impair or invalidate the computed results.
Since the machine units are of utmost importance in an analog computer, they
are typically highly stabilized with temperature compensated reference elements.
With machine units of ±10 V the computing elements are typically powered by a
±15 V supply to leave some headroom to detect overloads, etc. So in the case of
an overload, the output voltage of the affected element can reach values as high
as about ±15 V on a modern analog computer.
Scaling a problem to be solved on an analog computer has two objectives:
1. Guarantee that no variable exceeds the limits imposed by the machine units.
2. Make the best use of the available interval [m− , m+ ] for each variable of a
computer setup to minimize the unavoidable errors caused by the computing
elements.
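As a minimal numerical sketch of these two objectives (an added illustration with hypothetical helper names, not a circuit or procedure from this book), a scale factor can be derived from an estimated maximum magnitude of a problem variable:

```python
def scale_factor(x_max: float, headroom: float = 1.0) -> float:
    """Return a such that |a * x| <= headroom for all |x| <= x_max."""
    if x_max <= 0:
        raise ValueError("x_max must be positive")
    return headroom / x_max

# Example: a velocity expected to stay below 42 m/s becomes the scaled
# machine variable v_hat = a * v with a = 1/42, spanning [-1, 1] fully.
a = scale_factor(42.0)
print(a, a * 42.0)  # 0.0238..., 1.0
```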
2.2 Summer
The simplest active element of an electronic analog computer is the summer. Its
abstract symbol is shown in figure 2.1. A summer yields the negative sum of the
voltages applied to its inputs at its output, labeled eo in the figure. Each input
has a weight, a fixed multiplicative factor applied to the input. Typical weights
are 1 and 10, while some machines also feature values of 4 or 5. If no weight is
noted next to an input, it is assumed to be 1. Accordingly, all three inputs e1 , e2 ,
and e3 in figure 2.1 are weighted by 1.
To understand the behavior of a summer a look at its implementation is
necessary. Like most other analog computer elements, it is based on an operational
amplifier, opamp for short, the graphical symbol of which is shown in figure 2.2.
Fig. 2.1. Abstract symbol of a summer with three inputs e1, e2, and e3 and summing junction input SJ
Fig. 2.2. Graphical symbol of an operational amplifier
An operational amplifier has two inputs, one inverting and one non-inverting,
denoted by − and + in the figure. It yields the sum of the values applied to
these inputs, amplified by its large (ideally infinite) open-loop gain A that is
characteristic for a particular operational amplifier. Typically, gains of $10^5$ to $10^9$
can be achieved.11 The use of this type of amplifier in analog computers gave rise
to the name operational amplifier, as they form the basis of computing elements
implementing certain mathematical operations.
In a typical analog computer circuit, the non-inverting input of the operational
amplifiers is grounded, i. e., connected to the analog ground rail, usually denoted
by GND, which is at the potential representing the value zero; this effectively
disables this input.12
To build a summer based on an operational amplifier the concept of negative
feedback is essential. This technique was pioneered by Harold Stephen Black
in 1927. The basic idea is to use part of the signal at the output of the operational
amplifier and feed it back to its inverting input, thus basically controlling the
overall behavior of the resulting circuit by the feedback circuit, instead of relying
on the characteristics of the bare amplifier. This idea is central to nearly all oper-
ational amplifier circuits, including analog computing elements, such as summers
and integrators. Figure 2.3 shows the basic circuit of an operational amplifier with
negative (resistive) feedback.
This simple circuit has a single input ei , which is connected to the inverting
input of the operational amplifier by the resistor Ri . The output signal eo is also
connected to the inverting input via a feedback resistor Rf . Since all inputs as well
as the feedback path are connected to the inverting input, this is called summing
11 Operational amplifiers are rather complex devices. More in-depth information can be found
in [Jung 2006].
12 In classical analog computers, this non-inverting input was normally not connected to
ground directly but used to implement an active drift-compensation. More details on this can
be found in [Goldberg et al. 1954], [Korn et al. 1964, p. 137 et seq.], [Ulmann 2023, p. 80 et
seq.], etc.
Fig. 2.3. Operational amplifier with negative (resistive) feedback
Since A is typically very large14 and $R_f/R_i$ is typically ≤ 10, the denominator of (2.3) can normally be neglected, yielding the simplified form

$$e_o = -\frac{R_f}{R_i} e_i \tag{2.4}$$
describing the output voltage of the feedback circuit shown in figure 2.3.15
This shows that the behavior of this circuit is basically determined by the
resistors at the input and in the feedback path. If more input resistors are added
as shown in figure 2.4, a useful summing circuit results whose overall behavior is
readily described by
$$\sum_{i=1}^{n} \frac{e_i}{R_i} = -\frac{e_o}{R_f}. \tag{2.5}$$
The voltage at the output of this circuit is thus the negative of the sum of the
voltages at its inputs. The ratios
$$a_i = \frac{R_f}{R_i}$$
define the weights of the various inputs. Typical values for ai are 1 and 10. If
unusual values are required for a certain setup, the necessary resistors can be
connected to the summing junction SJ, thus effectively extending the number of
inputs of the summer.
14 In fact, classical high-precision operational amplifiers used in analog computers had gains
of up to $A = 10^9$.
15 A more informal approach to the behavior of such circuits is to assume that the open-loop
gain of the operational amplifier is extremely large, therefore, the voltage at the summing
junction is approximately zero. Since the input current of the amplifier is negligible, applying
Kirchhoff’s law to the currents at the summing junction shows that the current through
the feedback element is minus the sum of the currents through the input resistors. Applying
Ohm’s law then readily yields the output voltage of the computing element.
Fig. 2.4. Summer with several inputs based on an operational amplifier with negative feedback
Fig. 2.5. Graphical representation of an open amplifier
Summer basics:
The behavior of an (ideal) summer is described by
$$e_o = -\sum_{i=1}^{n} a_i e_i$$
with the weights ai being typically 1 or 10. It yields the negative sum of the
weighted voltages applied to its inputs. Typical summers have about six inputs,
three of which have input weight 1, while the remaining three inputs are weighted
by 10.
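The ideal summer equation is easy to model numerically. The following sketch (an added Python illustration with hypothetical names, not part of the original text) computes the output of a summer with weighted inputs:

```python
def summer(inputs, weights=None):
    """Ideal summer: e_o = -sum(a_i * e_i); weights are typically 1 or 10."""
    weights = weights or [1] * len(inputs)
    return -sum(a * e for a, e in zip(weights, inputs))

print(summer([0.1, 0.2, 0.3]))        # approx -0.6
print(summer([0.05, 0.02], [10, 1]))  # -(0.5 + 0.02) = -0.52
```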
16 Since no explicit weight is denoted next to the input, its weight is equal to 1.
Fig. 2.6. Two summers computing $e_o = 5(e_1 + e_2) - e_3$

This circuit yields $-(e_1 + e_2)/2$ at the output of the left summer. This output is now connected to an input of
the right summer, weighted with 10. Another input of this summer is fed with e3
finally yielding eo = 5(e1 + e2 ) − e3 .
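The cascaded sign inversions can be verified numerically. This added sketch (illustrative only; it repeats the summer() helper from the previous sketch and assumes a coefficient of 0.5 on the left summer's output) reproduces the result:

```python
def summer(inputs, weights=None):
    weights = weights or [1] * len(inputs)
    return -sum(a * e for a, e in zip(weights, inputs))

def circuit(e1, e2, e3):
    left = 0.5 * summer([e1, e2])       # -(e1 + e2)/2 at the left summer
    return summer([left, e3], [10, 1])  # -(10*left + e3) = 5*(e1+e2) - e3

print(circuit(0.1, 0.05, 0.2))  # 5*(0.1 + 0.05) - 0.2 = 0.55
```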
Figure 2.7 shows the front panel of an Analog Paradigm SUM8 module. This module
contains eight summers, each featuring five inputs, three of which have weight 1
and two have weight 10. The four summers in the top row have a special feature
which allows the built-in feedback resistor path to be opened by patching a con-
nection between the two jacks labeled FB and ⊥ (this symbol denotes ground),
thus turning a summer into an open amplifier.17 The summing junction SJ is also
available on all eight summers.
2.3 Integrators
An integrator yields

$$e_o = -\left(\int_0^t \sum_{i=1}^n a_i e_i \,\mathrm{d}\tau + e(0)\right) \tag{2.7}$$

with some constant e(0) called the initial condition. In particular, this means that partial differential equations, i. e., differential equations in which derivatives with respect to more than one variable occur, cannot be handled directly on an analog computer without explicit discretization or other techniques!
17 On these four summers, the feedback resistor Rf is split into a series connection of two
resistors of half the size of Rf . The connection between these two resistors is connected to
the FB jack. Grounding FB prohibits the current flowing through this path from reaching the
summing junction of the operational amplifier, thus effectively disabling the feedback path.
A typical integrator supports three modes of operation:

Initial condition: In this mode, often just called IC for short, the integrator is
reset so that its output takes on the negative of the initial condition e(0) that is
applied as a voltage to a special input jack of the integrator, which is typically
denoted by IC. This mode is normally the first step of each computation run
on an analog computer. It is important to note that due to time required to
charge the capacitor there is a minimum time required by an integrator to
take on that initial value.18
Operate: The actual computation is done in operate mode, OP for short. Here
the integrators integrate with respect to time over the negative sum of their
inputs as shown in (2.7).
Halt: In some cases it is useful to halt the analog computer for a short period of
time (typically of the order of some seconds) to read out values, etc. This is
done by switching all of its integrators into HALT-mode, HALT for short. In
this mode the outputs of all affected integrators are held constant – at least
18 This minimum time depends on the particular setup of the integrator with respect to its
time scale factor (see below) and the actual analog computer being used.
Fig. 2.8. Behavior of a typical integrator (output level vs. time t, showing the IC, OP, and HALT phases)
Fig. 2.9. Graphical representation of an integrator
Figure 2.8 shows the behavior of a typical integrator during these three modes
of operation. Here, the IC-input is connected to +1 (+10 V for example) while
one of the integrator inputs weighted with 1 is connected to −1. During IC-mode
the output is reset to the negative initial condition, −1. When the OP-mode
is activated the actual integration operation with respect to time starts. With
this particular setup the integration takes place over a constant value of −1.
Accordingly, the output yields a voltage ramp with a slope of 1. After two seconds
the analog computer is switched to HALT-mode and the integrator output stays
at its last value, +1.
The slope depicted in this figure needs some explanations as it is deeply con-
nected to time scaling as mentioned before: A typical integrator features several
time scale factors k0 that can be selected by either setting a switch or by placing
jumpers between certain plugs on the patch panel. A time scale factor k0 = 1
results in a slope of 1 when integrating over a constant value −1, i. e., with an
initial condition of e(0) = 0 the integration over −1 will yield an output value
of +1 after one second in OP-mode. Accordingly, the time scale factor k0 = 10
results in a slope of 10 and so forth. Typical analog computers feature integrators with time scale factors $k_0 = 10^n$ with $n \in \mathbb{N}$ typically ranging from 0 to 3, although some large historic systems, such as the EAI 680,19 feature time scale factors up to $10^5$.
with $a_i = 1/(R_i C)$ under the assumption of sufficiently large open loop gain A of the
operational amplifier. In fact, a high open-loop gain and small input currents are
even more important for a practical integrator than for a summer to keep its error
as low as possible, since errors due to the finite gain and to the non-zero amplifier
20 For more information about these control inputs see section 5.15.
Fig. 2.11. Basic schematic of an integrator
Fig. 2.12. Detailed schematic of an integrator with full mode control
input current naturally accumulate in a circuit like this. As a rule of thumb the
condition

$$(1 + A)RC \gg k_0$$

should be satisfied.
It is clearly important for all control switches used by the integrators of an ana-
log computer to switch in perfect synchronism and with no contact bounce at
all. Electromechanical relays are only advisable in small, cheap, low-speed analog
computers and even there the variation of switching times between different relays
can be quite detrimental. Today, electronic switches based on field effect transis-
tors, FET s for short, are normally used.
Integrator:
The behavior of an integrator is described by

$$e_o = -\left(\int_0^t \sum_{i=1}^n a_i e_i \,\mathrm{d}\tau + e(0)\right)$$
with the weights ai being typically 1 or 10 times the time scale factor. The in-
tegrator is reset to its initial condition e(0) by activating IC-mode. The actual
integration with respect to time takes place during OP-mode. An integration can
be temporarily halted by switching the computer to HALT.
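The three modes can be mimicked with a simple forward-Euler model. The following sketch (an added Python illustration with hypothetical names) reproduces the ramp of figure 2.8:

```python
def run_integrator(ic, inputs, k0=1.0, t_op=2.0, dt=1e-3):
    """Euler model of e_o = -(integral of sum(a_i*e_i) dtau + e(0))."""
    out = -ic                # IC mode: output preset to -e(0)
    t = 0.0
    while t < t_op:          # OP mode: integrate the negative input sum
        out -= k0 * sum(inputs) * dt
        t += dt
    return out               # HALT mode: output frozen at its last value

# IC input +1, one weight-1 input connected to -1: a ramp of slope 1
# (figure 2.8), ending at +1 after two seconds of OP time.
print(run_integrator(ic=1.0, inputs=[-1.0]))  # approx +1.0
```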
at the output of the leftmost integrator which is fed into the second integrator
finally yielding
$$y_3 = -\left(\int_0^t 2(1-\tau)\,\mathrm{d}\tau - 1\right) = 1 - 2t + t^2 = (t-1)^2.$$
Fig. 2.13. Generating a parabola by double integration over a constant
Fig. 2.14. Parabola generated by integrating twice over a constant function
Figure 2.14 shows an oscilloscope screenshot with the analog computer running
in repetitive mode containing all three functions y1 , y2 , and y3 .22
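The reconstructed integral for y3 can be checked numerically as well. This added sketch (illustrative only) evaluates it with forward Euler:

```python
# Forward-Euler check of y3 = -( integral of 2*(1 - tau) dtau - 1 ) = (t - 1)^2
dt, t, acc = 1e-4, 0.0, 0.0
while t < 0.5:
    acc += 2.0 * (1.0 - t) * dt  # running integral of 2*(1 - tau)
    t += dt
y3 = -(acc - 1.0)
print(y3, (t - 1.0) ** 2)  # both approx 0.25
```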
2.4 Free elements

Using the summing junction input of summers and integrators, free elements such
as resistors, diodes, and capacitors may be employed to extend the input capabili-
ties of these computing elements. In addition to that they can be used to establish
specialized feedback paths in order to setup computer circuits yielding functions
such as absolute value, hysteresis, signum function, etc., as will be shown later.
Figure 2.15 shows the front panel of three typical Analog Paradigm free el-
ement modules. The module on the far left is called XIR as it contains two free
resistor networks which can be connected to the summing junction of any summer
or integrator in order to extend the number of available inputs. In addition to the
standard input with weights of 1 and 10, this module also features input resistors
with weights 0.1 and 100.
The module in the middle, XID, contains six Schottky diodes as well as two
10 V Zener-diodes, sometimes called Z-diodes, which are often used to implement
limiters, etc.
22 The actual display is, of course, bright on a dark background which has been postprocessed
for printing here.
Fig. 2.15. Typical free elements such as resistors, diodes, and BNC adapters
Fig. 2.16. Basic circuits used as coefficient potentiometers
The module on the right, XIBNC, holds four BNC jacks which can be used
to connect peripheral equipment such as oscilloscopes, function generators, etc.,
to the analog computer system.
2.5 Potentiometers
Fig. 2.17. Voltage divider with loads Rload = ∞, Rload = R and Rload = R/10

In its basic form, a coefficient potentiometer is a voltage divider consisting of an upper resistor Ru and a lower resistor Rℓ with total resistance R.
The output is connected to the connection between both resistors. The propor-
tion between the two resistors is determined by the angular position α of the
potentiometer dial:
Ru = (1 − α)R
Rℓ = αR
Obviously, α = 0 yields no output signal at all since the output connection is then
effectively tied to ground.
Since the overall resistance of this setup is Ru + Rℓ = R, the output voltage
eo will be
$$\frac{e_o}{e_i} = \frac{R_\ell}{R}$$
in the unloaded case for an input voltage ei . In this context unloaded means that
no current flows out of the voltage divider, thus the same current flows through
both resistors. In this unusual yet idealized case the coefficient can be directly
set to up to three decimal places by means of a precision ten-turn potentiometer
fitted with a multi-turn dial.
Since most computing elements typically have a finite input resistance between
$10^4\,\Omega$ and $10^6\,\Omega$, the output of such a voltage divider is loaded resulting in a non-
negligible setup error as shown in figure 2.17. The linear solid line shows the
behavior of an unloaded potentiometer with respect to α, while the dashed and
dotted lines show the effect of external loading with R and R/10 respectively.
A loaded voltage divider is described by
$$R_\ell \parallel R_{load} = \frac{R_\ell R_{load}}{R_\ell + R_{load}}$$
with ∥ denoting a parallel circuit. Accordingly, the overall resistance of this setup
is
$$R_{total} = R_u + R_\ell \parallel R_{load} = R_u + \frac{R_\ell R_{load}}{R_\ell + R_{load}}$$
yielding the following expression for the output voltage of a voltage divider loaded
by Rload for a given setting α:
$$\frac{e_o}{e_i} = \frac{R_\ell \parallel R_{load}}{R_{total}} = \frac{\dfrac{R_\ell R_{load}}{R_\ell + R_{load}}}{R_u + \dfrac{R_\ell R_{load}}{R_\ell + R_{load}}} = \frac{R_\ell R_{load}}{R_u(R_\ell + R_{load}) + R_\ell R_{load}} = \frac{R_\ell}{\dfrac{R_u R_\ell}{R_{load}} + R_u + R_\ell} = \frac{\alpha R}{(1-\alpha)\alpha\dfrac{R^2}{R_{load}} + R} = \frac{\alpha}{(1-\alpha)\alpha\dfrac{R}{R_{load}} + 1}.$$
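Evaluating this closed-form expression shows how large the loading error becomes. The following added sketch (illustrative Python, not from the book) uses the load values of figure 2.17:

```python
def divider(alpha: float, r_over_rload: float) -> float:
    """Loaded voltage divider: e_o/e_i = alpha / ((1-alpha)*alpha*R/R_load + 1)."""
    return alpha / ((1.0 - alpha) * alpha * r_over_rload + 1.0)

for alpha in (0.25, 0.5, 0.75):
    print(alpha,
          divider(alpha, 0.0),   # unloaded: exactly alpha
          divider(alpha, 1.0),   # R_load = R   (dashed curve in figure 2.17)
          divider(alpha, 10.0))  # R_load = R/10 (dotted curve)
```

For α = 0.5 and Rload = R, the effective coefficient drops from 0.5 to 0.4, illustrating why the dial reading cannot be trusted under load.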
This makes potentiometer setting a rather tedious process on most analog
computers as the value shown on the precision dial typically deviates quite sub-
stantially from the actual value with the deviation depending on the actual com-
puter setup, i. e., the load connected to the slider of the potentiometer.
Therefore, most analog computers feature an additional mode of operation
called potentiometer setting, POTSET for short, in which the integrators are
switched to IC- or HALT-mode and all input resistor networks are disconnected
from the inverting inputs of their associated operational amplifiers and connected
to ground instead. Thus, all inputs show the correct load resistance to their feeding
computing elements without any computation being performed.
In this mode the input of a potentiometer selected to be setup is automatically
connected to +1. Its output value – with its correct load – can then be read out
by means of a high-impedance digital voltmeter, or DVM for short. Instead of
a DVM a compensation voltmeter is often used by many early or small analog
computers. Here, the output voltage of the potentiometer to be setup is compared
against a precision voltage source. The difference between these two values is
then fed to a null-detector. With both voltages being equal, no current flows
between the potentiometer being setup and the reference source, thus no additional
error is introduced. Nevertheless, this method is quite time-consuming as one has
to setup the precision voltage source first and then adjust the actual coefficient
potentiometer until the null-detector shows that the adjustments match.
With the advent of affordable operational amplifiers it became possible to
employ an impedance converter in order to unload the potentiometer, as shown in
figure 2.18. The only load at the potentiometer slider is then due to the negligible
current i+ . The potentiometer is said to be buffered in this case.
The clever circuit extending this idea shown in figure 2.19 makes it possible
to set a coefficient −1 ≤ a ≤ +1 instead of restricting its value to the interval [0, 1]
as in a simple voltage divider. The behavior of this circuit is described by
eo = e1 (2α − 1)
with 0 ≤ α ≤ 1.
Figure 2.20 shows an example of the application of a coefficient potentiometer
in an analog computer program for computing the integral
$$e_o = -\int_0^t 5 e_i \,\mathrm{d}\tau = -5 \int_0^t e_i \,\mathrm{d}\tau.$$
It should be noted that the classic graphical symbol for a potentiometer does
not distinguish between its input and output, since this will always be clear from
the surrounding circuitry.23 Some classic analog computer programs rely on the
fact that the potentiometers used are unbuffered. Such programs may have to be
adapted when being ported to a machine featuring only buffered potentiometers.
Figure 2.21 shows a typical PT8 module from an Analog Paradigm analog
computer containing eight buffered potentiometers, the last of which can be used
as a free potentiometer. While the first seven potentiometers each feature one
input and two paralleled outputs, this eighth potentiometer has two input jacks
labeled INa and INb. If it is to be used as a standard coefficient potentiometer, a
connection between INb and ⊥ must be patched. All of these potentiometers are
of the ten-turn type and are fitted with precision dials, allowing them to be set
up to a maximum of three decimal places. Due to their associated buffers, the dial
readings are accurate.
Fig. 2.19. Extended coefficient circuit allowing for factors −1 ≤ a ≤ 1
Fig. 2.20. Using a coefficient potentiometer
Potentiometers:
Potentiometers always have a distinct input and output, which must not be
reversed although their graphical symbol in computer programs won’t distinguish
explicitly between input and output. In classic analog computers the sliders of the
precision ten-turn potentiometers are typically protected by either incandescent
light bulbs or by microfuses. Be careful not to blow these microfuses as these are
hard to get nowadays. More recent implementations usually employ an operational
amplifier as impedance converter to unload the potentiometer’s slider and protect
it from patching errors. Typically potentiometers can only be used to set coefficients in the interval [0, 1].
2.6 Function generators

Fig. 2.22. Graphical representation of a function generator
Fig. 2.23. Simplified schematic of a function generator with biased diodes
a parabolic behavior for values in a small interval about the break-point; this is
normally not a problem. In addition to this, the kinks of a polygonal line are
sometimes undesirable. These can be smoothed out by superimposing a small
high-frequency sinusoidal signal on ei , if required.
Figure 2.24 shows a typical example of such a function generator (of course
having many more diodes than shown in the schematic of figure 2.23) made in
the 1960s by EAI. It features ten diode segments, each with adjustable break-
point and slope. The potentiometers on the left side control the break-points,
while those on the right set the slope of each segment. The rotary switches in the
middle control the basic direction of the slope (positive, negative, or off). Such an
arrangement has the advantage that the break-points can be located according to
the “complexity” of the function being approximated.
Another widespread implementation variant of such diode function generators
features equally spaced fixed break-points. While EAI always used variable break-
points, the German company Telefunken mainly used function generators with
fixed break-points arguing that this makes their setup simpler since only the slope
potentiometers have to be set. Giving up the freedom of choosing break-points
freely is, indeed, not too much of a loss as most functions are relatively well-
behaved, not requiring excessive slopes for a polygonal approximation. The device
shown in figure 2.25 contains four distinct function generators, each having 21
slope potentiometers.
Some functions are used frequently enough to justify the construction of fixed
function generators. Among these functions are sin(), cos(), tan(), exp(), and log().
Figure 2.26 shows a module from a Telefunken analog computer of the 1960s
featuring eight such special functions.
Setting up such diode function generators is quite time-consuming and some-
times better approximations or even functions of more than one variable, which
Fig. 2.24. Diode function generator with adjustable break-points (EAI)
Fig. 2.26. Example of a classic fixed function generator (Telefunken)
cannot be realized with reasonable effort by purely analog electronic means, are
required. In cases like these, a digital computer can be coupled to the analog
computer by means of analog-digital- and digital-analog-converters.25 The digital
computer may then perform simple and fast table-lookups based on the values
read from its ADC(s) and output the corresponding function values by means of
a DAC. This, in fact, is a first step towards a hybrid computer.
Function generators:
Most function generators are based on using biased diodes to approximate func-
tions by polygonal lines. If possible, it is advisable to create special functions by
other means such as Taylor approximation, etc. Function generators are some-
thing of a last resort due to their time-consuming setup and the errors introduced
by approximating typically smooth functions by polygons.
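The polygonal approximation underlying such function generators can be illustrated numerically. In this added sketch (the break-points and the target function sin() are chosen arbitrarily), NumPy's linear interpolation plays the role of the diode network:

```python
import numpy as np

breaks = np.linspace(0.0, np.pi / 2, 6)    # six fixed break-points
values = np.sin(breaks)                    # segment values set via slope pots
x = np.linspace(0.0, np.pi / 2, 1001)
approx = np.interp(x, breaks, values)      # polygonal line through the points
print(np.max(np.abs(approx - np.sin(x))))  # worst-case error, order 1e-2
```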
Fig. 2.27. Graphical symbols for multipliers
Fig. 2.28. Multiplication by adding logarithms and taking the antilog
2.7 Multiplication
−1
+1 eo
Fig. 2.29. Multiplier module MLT8 Fig. 2.30. Generating a parabola with an inte-
grator and a multiplier
The quarter square approach exploits the algebraic identity

$$e_0 e_1 = \frac{(e_0 + e_1)^2 - (e_0 - e_1)^2}{4},$$

which requires only two function generators instead of the three required for the
logarithm-technique described above. Likewise, the arguments are no longer re-
stricted to a positive interval. Due to its use of two squaring functions in conjunc-
tion with the factor of 1/4, this type of multiplier became known as quarter square
multiplier. Its implementation typically required positive and negative versions of
both of its arguments e0 and e1 . Often these values were readily available in the
surrounding computer setup. If not, dedicated inverters (summers with only one
input) were used.
Multipliers allowing arguments −1 ≤ e0 ≤ +1 and −1 ≤ e1 ≤ +1 are generally
called four quadrant multipliers in contrast to two quadrant multipliers, which only
yield results in two of the four quadrants of a Cartesian coordinate system.
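The quarter-square identity is easily verified numerically. The following added sketch (illustrative Python, not from the book) models such a multiplier with two squaring operations:

```python
def qsm(x: float, y: float) -> float:
    """Quarter square multiplier: x*y = ((x + y)^2 - (x - y)^2) / 4."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0

for x, y in [(0.5, 0.5), (-0.3, 0.7), (-0.6, -0.2)]:
    print(qsm(x, y), x * y)  # agrees in all four quadrants
```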
Figure 2.29 shows a modern Analog Paradigm MLT8 multiplier module con-
taining eight individual four quadrant Gilbert-cell multipliers.
A simple application is shown in Figure 2.30 where a multiplier in conjunction
with an integrator is used to generate a parabola. The integrator yields a linear
voltage ramp running from +1 to −1 which is fed into both inputs of a multiplier,
which in turn yields a parabola function at its output.
Multiplication, etc.:
Many traditional analog computers feature quarter square multipliers which need
both input variables with positive and negative signs. By reversing the two inputs
for either x or y a sign-reversal of the result can be achieved without the need for
an additional inverter.
2.8 Comparators and switches

A comparator is used to compare two analog signals e0 and e1 and to yield a logic
output signal for the condition e0 + e1 > 0.27 Typically, a relay or preferably an
electronic switch is controlled by this logic output signal. This makes it possible
to implement switching functions in an analog computer setup.
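Functionally, a comparator-switch combination behaves like the following added sketch (a hypothetical Python helper, not from the book): the condition e0 + e1 > 0 selects which of two analog signals reaches the output.

```python
def comparator_switch(e0, e1, on_signal, off_signal):
    """Electronic switch driven by the comparator condition e0 + e1 > 0."""
    return on_signal if (e0 + e1) > 0 else off_signal

print(comparator_switch(0.3, -0.1, +1.0, -1.0))  # +1.0, since 0.2 > 0
print(comparator_switch(0.3, -0.5, +1.0, -1.0))  # -1.0
```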
Figures 2.31 and 2.32 show the graphical symbols used to denote a comparator-
switch combination as well as a pure comparator. In contrast to a relay switch,
which has no dedicated inputs and outputs, an electronic switch may not be
reversed with respect to its input/output connections.
Figure 2.33 shows a simplified schematic of a typical comparator driving a
relay. An operational amplifier is fed with the two analog inputs to be compared
via two resistors R of equal value. Its feedback path consists of two biased diodes
which start conducting at certain output voltages of the operational amplifier as
determined by the two bias sources, which are depicted here as batteries. The
purpose of these two diodes is to limit the output voltage in order to avoid driving
the output stage of the amplifier into saturation. Classic implementations often
controlled high-speed polarized telegraph relays with this circuit.
Figure 2.34 shows an Analog Paradigm CMP4 module with four comparators
with four associated electronic switches. The left half of each row belongs to the
comparators with their two inputs and the output while the right half is associated
with the electronic SPDT -switches. The output of each comparator as well as the
control input of each associated electronic switch are available at the front panel.
If the input jack of an electronic switch is left unconnected, it is automatically
connected to its associated comparator’s output. If the output of a comparator is
explicitly patched to the input of an electronic switch, it will take over control of
this switch. This makes it possible to control more than one switch with a single
comparator output without requiring additional connections for normal operation.
27 It should be noted that a typical comparator does not perform a comparison like e0 > e1 .
Fig. 2.31. Graphical symbol representing a comparator combined with a switch
Fig. 2.32. Symbol for a comparator yielding a logic output signal
Fig. 2.33. Basic circuit of a comparator with relay
Fig. 2.35. Example of a pen-plotter output
Fig. 2.36. Typical oscilloscope screenshot
3 Analog computer operation
Although, at first sight, using an analog computer to solve a problem may appear
to be a daunting prospect, in practice it is a relatively straightforward process.
Figures 3.1 and 3.2 show two classic analog computers – an EAI-580 and a Tele-
funken RA 770 precision analog computer.
The most obvious feature of both machines is the large, central patch panel,
which contains several thousand jack sockets; these are typically shielded on high
precision analog computers. Whilst most manufacturers developed their own cable
and connector system, small table-top analog computers use standard 4 mm or
2 mm banana plugs for interconnecting their computing elements. The analog
computer program is set up by linking the jacks on the patch panel with the patch
leads to create a circuit which is an analogue for the problem being investigated.
Figure 3.3 shows a patch panel being removed from such a classic machine.
Programs were typically pre-patched and the patch panels stored in a cabinet
when not in use. As archaic as this seems from today’s perspective, it is a viable
means of storing and “loading” analog programs.
A more detailed look at the EAI-580 in figure 3.1 shows that its left half
is divided into four subunits. At the bottom is the main control panel allowing
the operator to select the desired mode of operation (IC, OP, HALT, repetitive
operation). It also contains the push-button controls for setting up the servo-set
potentiometers, which were normally fitted in medium to large analog computers.
These replaced most of the manual precision potentiometers and consist of a po-
tentiometer mechanically coupled to a small motor that is in turn controlled by
a simple servo-loop. Using such an arrangement it is possible to set up these po-
tentiometers by selecting a potentiometer address and entering the desired value
on this control panel. Without servo-set potentiometers the number of manual
potentiometers and variable function generators, which had to be set up, made
program changes time-consuming and cumbersome.
Fig. 3.3. Removing a patch panel from an EAI 231RV analog computer (cf. [EAI PACE 231R,
p. 6])
The leftmost panel in the middle section contains the central overload in-
dicator panel, which features one incandescent light bulb for every operational
amplifier in the system, as well as a four-digit digital voltmeter and a traditional
moving coil instrument. The rightmost panel contains ten manual potentiometers
and a number of switches and display lights, which show the respective states of
the computer’s comparators and relays.
On top is a digital expansion unit that consists of a number of logic gates,
flip-flops, and delay elements, which can be interconnected by means of a small
dedicated removable patch panel. Using these digital elements, it is possible to
extend the simple repetitive mode of operation by controlling the modes of indi-
vidual integrators, etc. This facility is useful for solving complex problems such as
process optimization.
The Telefunken RA 770 computer shown in figure 3.2 contains basically the
same elements. The operator controls and an oscilloscope are located in the middle
section of the computer. On the left side is a large digital expansion group, similar
to that of the EAI-580 system. The expansion chassis on the bottom left contains
variable and fixed function generators.
An example of a modern analog computer, the Analog Paradigm Model-1,
is shown in figure 3.4. Compared with classic machines it is tiny and does not
have a central patch panel due to the fact that the overall system is fully modular
and thus can be equipped with the modules that any given problem requires. The
Fig. 3.4. Analog Paradigm Model-1 analog computer (picture by James Ball)
system shown is the most basic configuration consisting of eight summers, four
integrators with four time scale factors each, eight multipliers, four comparators
with electronic switches, eight manual precision potentiometers, a power supply,
and a control unit. Since modern analog computers are not normally shared be-
tween different users, the absence of a central patch panel is not a significant
drawback and is more than compensated for by the inherent flexibility of the
modular system design.
The control unit, CU for short, of this system is shown in more detail in figure
3.5. It controls the operation of the integrators in the computer either in manual
or automatic mode. In manual mode the computer can be switched into the modes
IC, OP, and HALT by means of the push-button INITIAL and the toggle-button
OP/HALT.
Automatic mode allows either single or repetitive operation of the analog
computer by activating the SINGLE or REPEAT button respectively. In both of these
cases, OP-time is set in 10 ms steps by the push-button precision potentiometer
on the top of the CU. The push-button ICTIME selects one of two preset IC-times,
suitable for integrators operating with a time constant of 1 or 10 and $10^2$ or $10^3$
respectively. Pushing OVLOAD will cause the computer to automatically enter its
HALT-state when an overload is detected in any computing element during an
OP-cycle.
The jacks on the bottom of the control unit provide trigger signals that can
be used in conjunction with external devices such as oscilloscopes or plotters.
The jack labeled EXTHALT can be used to halt the current OP-cycle by applying
an external signal, typically the output of a comparator or a signal derived from
external equipment.
An even more recent analog computer is THE ANALOG THING28 (THAT),
which was introduced as an open hardware project29 in late 2021 and is shown in
figure 3.6. This system cannot compete with large classic analog computers with
respect to precision or its sheer number of computing elements, but it is powerful
enough to implement really interesting analog computer programs.
It features eight coefficient potentiometers, five integrators30 , four summers,
four inverters, two multipliers, two comparators, two sets of free resistor networks,
which can be used to extend the number of inputs of the integrators, summers,
and inverters, as well as several free capacitors and diodes. When larger problems
are to be tackled, several THATs can be cascaded in a master/minion mode of
operation by means of connectors on the back.
attached digital computer, while MASTER OUT connects to MINION IN of the next
system in a master/minion setup.
Operation of THE ANALOG THING is straightforward. The switch on the
lower right controls the mode of operation. When set to MINION the mode of
operation of this THAT is controlled by the master in a chain, OFF turns the device
off. Coefficients are set in COEFF mode. Since no precision 10-turn potentiometers
are used in this little analog computer, the coefficient to be set is selected by the
switch labeled COEFFICIENT. Its corresponding value is then displayed on the panel
voltmeter.
IC sets the machine to initial condition, while OP and HALT select operating
and halt mode respectively. The two switch positions labeled REP and REPF allow
repetitive operation with the operation time set by the OP-TIME potentiometer.
REP allows for operation times of up to several seconds, while REPF is used for
fast repetition times. In both cases, the panel voltmeter gives an indication of the
selected operational time.
4 Basic programming
Although solving a problem on an analog computer has basically the same pre-
requisites as programming a digital computer, the actual process of programming
differs significantly from the now-predominant algorithmic approach. Figure 4.1
shows the main steps involved. First, the problem under investigation has to be
examined and a mathematical representation, typically a system of coupled dif-
ferential equations, DEQs for short, has to be derived. From here on the digital
and analog ways of programming quickly diverge. On a digital computer these
equations form the basis for an algorithm which will be executed in a stepwise
fashion. When programming an analog computer the very same equations will be
transformed into a patching setup which determines how the various computing
elements are connected to each other. This setup is called an analog computer
program.
Since differential equations are of utmost importance in analog computer pro-
gramming, a few words about these mathematical objects might be appropriate.
Basically, a differential equation is an equation in which the unknown is not a
simple value but rather a function. A differential equation is made up from this
unknown function as well as various derivatives of this function. To simplify things
a bit32 it will be assumed that the function and all of its required derivatives are
with respect to a single independent variable. This independent variable is often
32 Differential equations are by no means really simple – many of them, often those of great
relevance for practical applications, have no closed form solution at all and are thus only
accessible to numerical or – in our case preferably – analog simulation techniques.
Fig. 4.1. How to program an analog computer (cf. [Truitt et al. 1960, p. 1-108])
time. Differential equations of this type are commonly called ordinary differential
equations or ODEs and are of the general form
$$f\left(t, y, \frac{\mathrm{d}y}{\mathrm{d}t}, \frac{\mathrm{d}^2y}{\mathrm{d}t^2}, \ldots, \frac{\mathrm{d}^ny}{\mathrm{d}t^n}\right) = 0 \tag{4.1}$$
where y denotes the unknown function, not a “simple” variable! If the independent
variable is time t, y depends on t and thus should be always thought of as y(t),
although this is almost never written explicitly to avoid unnecessary clutter in the
equations. This time dependency makes differential equations very interesting and
powerful, as they can be used to describe dynamic systems.
The highest derivative of y determines the order of the ODE. As is customary,
derivatives with respect to time will be denoted by simply placing a dot over the
function as shown in the following:
$$\dot{y} = \frac{\mathrm{d}y}{\mathrm{d}t},\quad \ddot{y} = \frac{\mathrm{d}^2y}{\mathrm{d}t^2},\quad \dddot{y} = \frac{\mathrm{d}^3y}{\mathrm{d}t^3},\ \ldots$$
Typically, physical systems with y representing some position of an object require derivatives $\dot{y}$ (velocity), $\ddot{y}$ (acceleration), and sometimes $\dddot{y}$ (jerk or jolt).
Differential equations can also contain functions and derivatives thereof with
more than one independent variable. A typical example for this is the well known
one-dimensional wave equation
$$\frac{\partial^2 u}{\partial t^2} = c_p^2 \frac{\partial^2 u}{\partial x^2},$$
which describes how a wave travels through a medium. Differential equations of
this type are called partial differential equations (PDEs) and are normally much
more complicated to solve, even by using an analog computer, than ODEs.
4.1 Radioactive decay

will typically be on large numbers of atoms of any given isotope. With the dot
notation as introduced before, this can be written as33
$$\dot{N} \propto N,$$

i.e., $\dot{N} = -\lambda N$ with some decay constant $\lambda > 0$. Separating variables and integrating yields $N(t) = e^{-\lambda t + c}$, and setting $t = 0$ fixes the integration constant:

$$N(0) = e^{0} e^{c} = e^{c},$$
finally yielding
$$N(t) = N(0)e^{-\lambda t}, \tag{4.3}$$
the universal law of radioactive decay 37 with λ denoting the decay rate.
Typically, the half-life T1/2 is found in isotope tables instead of λ. T1/2 denotes
the time in which half of the atoms available at t = 0 have decayed. It is linked to
λ by
$$T_{1/2} = \frac{\ln(2)}{\lambda},$$
yielding the following form for the exponential decay that is typically found in
textbooks:
$$N(t) = N(0)\,e^{-\frac{\ln(2)}{T_{1/2}} t}.$$
37 Another derivation, lacking the mathematical rigor, can be found in standard texts such
as [Schwankner 1980, p. 111 f.].
38 Cf. [Schwarz 1971, p. 168 et seq.] for more information.
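The closed form (4.3) is easy to verify numerically. The following short Python sketch (an editorial illustration, not part of the analog setup; the values of λ and the step size are arbitrary choices) integrates the rate equation Ṅ = −λN with Euler steps and compares the result with (4.3):

import math

lam = 0.287                     # illustrative decay rate
N = N0 = 1.0                    # scaled initial value
dt = 1e-3                       # Euler step width
t = 0.0
while t < 20.0:                 # corresponds to a 20 ms OP interval
    N += dt * (-lam * N)        # Euler step of N' = -lam * N
    t += dt

print(N, N0 * math.exp(-lam * t))   # both approximately 0.0032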
Fig. 4.2. Initial computer setup for the decay problem
Fig. 4.3. Final computer setup for the decay problem
In the case of radioactive decay the DEQ already has its highest derivative on
the left-hand side:
Ṅ = −λN. (4.4)
39 The implicit change of sign caused by integrators and summers must always be accounted
for.
40 All output jacks of these computing elements are connected to each other.
Fig. 4.4. Setup for the simulation of radioactive decay
Fig. 4.5. Simulation of radioactive decay
In order to obtain deeper insight into the behavior of a dynamic system like
this, the computer can be run in repetitive mode. If the OP-time is short enough,
a (nearly) flicker-free picture of solution curves can be obtained on an oscilloscope
screen. The effects of changing the settings of the coefficient potentiometers will
immediately show on the display. This mode of operation typically requires a
simple linear ramp τ for the x-deflection of the oscilloscope.41 This can be easily
generated by a single integrator, as shown in figure 4.6. The setting of β controls
the position at which the oscilloscope beam will start, while α controls the speed
at which the ramp increases, thus controlling the width of the resulting picture.
Figure 4.7 shows a long-term exposure screenshot with the analog computer
running in repetitive mode using a linear voltage ramp for x-deflection, while the
parameter λ was varied manually during the exposure. The exponential character
of the decay function can be seen clearly.42
While a high time scale factor k0 of the integrators is required for an oscil-
loscope output, k0 must be much lower if a plotter is used as output device in
order to not exceed the maximum writing speed of the plotter. In this case single
run operation would be used to obtain a single graph of the solution function per
computer run.
41 It would also be possible to use the built-in sweep generator of a standard oscilloscope,
but this would require triggering the oscilloscope from the analog computer’s control unit in
order to obtain a stable picture.
42 The faint trace running from the lower right to the upper left is the result of the periodic
reset of the integrators. Ideally the oscilloscope’s beam should be switched off by its z-input
under control of the control unit.
Fig. 4.6. Generation of a sweep signal
Fig. 4.7. Simulation of radioactive decay in repetitive mode of operation with varying parameter λ
4.1.3 Scaling
The solutions obtained in the previous section were only qualitative – the settings
of λ and k0 were chosen arbitrarily to obtain a feeling for the behavior of the expo-
nential decay. To get quantitative solutions the problem must be scaled properly.
Scaling is a central part of any analog computer programming and is often the
most difficult aspect.
As a rule of thumb, all variables within a computation should span as much
of the permissible range of values [−1, 1] as possible in order to keep the effects
of unavoidable errors in the computing elements as small as possible. Therefore,
N(0) might be scaled so that it corresponds to −1.⁴³ If, for example, 0.12 mols⁴⁴
of $^{234}_{90}$Th are given, this would be represented by a setting of N(0) = 1 in the
computer setup shown in figure 4.3. If the output N of the inverter reads 0.4 after
some time, this would correspond to 0.4 · 0.12 = 0.048 mols.
So, it is necessary to distinguish between problem variables, which are the vari-
ables appearing in the mathematical representation of the problem, and machine
variables, which are the corresponding scaled variables as they occur in a given
scaled computer setup. Machine variables are often denoted by a hat:
$$\hat{N} = \alpha N \quad \text{with} \quad \alpha = \frac{1}{\max(N)},$$
43 It does not matter what the actual voltage of the machine unit is – be it 5, 10, 50, 100 V
or something completely different. As long as everything is scaled to ±1 the actual machine
unit only has to be taken into account during readout if a standard DVM or other instrument
is used.
44 One mol of a substance contains about 6.022 · 10²³ atoms or molecules (Avogadro's
constant) and weighs as many grams as the atomic/molecular weight of that substance. So
one mol of $^{12}_{6}$C weighs 12 grams.
where α is the scale factor for this variable. In the example above it is
$$\alpha = \frac{1}{0.12} \approx 8.333.$$
This amplitude scaling of variables is only one side of the coin. It is also nec-
essary to distinguish between problem time, i. e., the time in which the actual
problem runs, and machine time, the time at which the current simulation runs.
The process of transforming problem time to machine time is called time scaling.
To distinguish between both times the symbols t and τ are commonly used, repre-
senting problem and machine time; these are coupled by a scale factor β yielding
τ = βt. β is determined as the product of k0 of the integrators in a given setup
and their respective input weights.
Machine time is running faster than problem time if β > 1. This is suitable
for simulating processes which change too slowly with time for direct observation.
Radioactive decay of some long-lived isotope is a typical example of such a process.
In other cases where the problem time is too short to make useful observations,
choosing β < 1 suitably will slow down machine time accordingly.
In the case of $^{234}_{90}$Th having a half-life of T1/2 = 24.1 days, a substantial
speedup of machine time as compared to problem time is desirable. To get a
flicker-free display on an oscilloscope, machine time will be scaled so that 2.41 ms
will correspond to the half-life T1/2 = 24.1 d with 0 ≤ τ ≤ 20 ms. This results in
$$\lambda = \frac{\ln(2)}{2.41} \approx 0.287,$$
which will be set using the coefficient potentiometer labeled λ in figure 4.3. Setting
the time scale factor of the integrator to k0 = 103 will scale everything from
seconds to milliseconds, yielding the desired time-compression.
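The arithmetic behind this time scaling is compact enough to check directly; the following lines (Python, purely illustrative) reproduce the coefficient derived above:

import math

T_half = 2.41                   # half-life mapped onto machine time (ms)
lam = math.log(2) / T_half      # decay rate in 1/ms
print(lam)                      # ~0.2876, the setting of the lambda potentiometer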
Scaling:
Scaling often isn't as easy as in the example above. The following approach
should generally be used:
1. Determine the interval for each variable in a computer setup. This can be done
by either
– guesstimating (possible only if the underlying differential equations are not
too complicated) or
– by solving the problem numerically, or
– experimentally, by running the analog computer and observing the vari-
ables on an oscilloscope, etc. If a variable causes an overload, it is a good
idea to scale it down by a constant factor such as 2. If a variable is con-
tained in a small subinterval of [−1, 1], it should be scaled up accordingly
to minimize errors due to the implementation of the computing elements.
There also exists software, such as the DEQscaler,45 to perform problem scal-
ing for analog computers. This package basically solves the equations in question
numerically, determines maximum values and then derives suitable scale factors
for an analog computer implementation.
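The numerical part of this approach is easily sketched. The following Python fragment (a toy stand-in for such a scaling tool, not the actual DEQscaler) solves a sample equation, records the extrema of all variables, and derives scale factors from them:

# Toy example: y'' = -y with a small initial deflection, solved by Euler steps.
y, ydot = 0.3, 0.0
max_y, max_ydot = abs(y), abs(ydot)
dt = 1e-4
for _ in range(int(10.0 / dt)):
    y, ydot = y + dt * ydot, ydot - dt * y
    max_y = max(max_y, abs(y))
    max_ydot = max(max_ydot, abs(ydot))

# Scale factors mapping each variable onto the machine unit interval [-1, 1]:
print(1 / max_y, 1 / max_ydot)      # both about 3.33 in this example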
The next example is a bit more complicated and is based on the following second
order differential equation
$$\ddot{y} + \omega^2 y = 0. \quad (4.5)$$
Solved for its highest derivative, this reads
$$\ddot{y} = -\omega^2 y.$$
This differential equation is still simple enough that analytic solutions can be
“guessed”. Functions y for which the second derivative is equal to the negative of
the function with some weight ω² are required. One such function is, of course,
the constant 0-function, which is quite trivial. Two more interesting solutions are
based on sine and cosine as
$$\frac{d^2 \sin(\omega t)}{dt^2} = -\omega^2 \sin(\omega t) \quad \text{and} \quad \frac{d^2 \cos(\omega t)}{dt^2} = -\omega^2 \cos(\omega t).$$
So just by inspection, three basic solutions of this differential equation have been
identified. These are called particular solutions, but which solution is the correct
one? This cannot be decided from just (4.5) as this equation does not contain any
information about the initial conditions. Since this DEQ is of order two there are
two initial conditions involved, one for y, called y(0), and one for ẏ, denoted by
ẏ(0).
To keep things simple assume that ω = 1. This results in the basic computer
setup shown in figure 4.8. The leftmost integrator has an output signal of −1 at the
start of the simulation run, t = 0. The second integrator implicitly has an initial
condition of 0, therefore, its output is 0 at t = 0 and will initially rise during the
simulation due to the input values delivered by the output of the first integrator.
With ω = 1, τ is controlled solely by k0, which has to be set equal for both
integrators. Setting k0 = 10³ results in an output signal with a period of 6.28 ms,
which is exactly what would be expected for ω = 1.
Using a different initial condition scheme such as 0 for the leftmost integrator
and −1 for the second integrator would have yielded cos(τ) instead of sin(τ). By
applying initial conditions ≠ 0 to both integrators, linear combinations of these
two particular solutions can be achieved.
Since this basic circuit yields ± cos(τ ) as well as ± sin(τ ) for suitable initial
conditions, it is commonly called a quadrature generator. It is the basis of many
more complex computer setups and is especially useful when figures based on
closed loops need to be displayed on an oscilloscope, as will be shown in later
sections.
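The behavior of this loop is easy to reproduce numerically. The following Python sketch (illustrative; the signs follow the mathematics rather than the patching diagram) integrates the two coupled integrator states with the initial conditions just described:

import math

x1, x2 = -1.0, 0.0              # first integrator starts at -1, second at 0
dt = 1e-4
t = 0.0
while t < 2 * math.pi:          # one full period for omega = 1
    x1, x2 = x1 + dt * x2, x2 - dt * x1
    t += dt

print(x1, x2)                   # back near (-1, 0): x1 = -cos(t), x2 = sin(t)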
Now, what about 0 < ω < 1? Wouldn't a simple solution just require a single
coefficient potentiometer after the inverter in the setup shown in figure 4.8?
Unfortunately, this is not the case, as ω effectively affects the time scaling of both
integrators! Using one potentiometer following the inverter, thus preceding
the input of the leftmost integrator, would only change its time scale factor to
Fig. 4.9. Basic computer setup for equation (4.5) with variable ω
Fig. 4.10. Output of the second integrator in the computer setup shown in figure 4.9 with k0 = 10³ and ω = 1/2
ωk0 while the second integrator would still run at k0 . Therefore, it is typically
necessary to change the time scale factor of both integrators at once, as shown in
figure 4.9.
Figure 4.10 shows the resulting output signal of the leftmost integrator for
ω = 1/2. One x-division of the screen corresponds to 2 ms as before. So one period
of the displayed sine-signal is about 12.55 ms, which is a good approximation to
4π as would be expected for this particular value of ω.
The results shown so far have been generated with the computer set to repetitive
operation and an OP-interval of 20 ms. In some cases a circuit like that shown
in figure 4.9 has to run for an extended period of time, for example to display a ball
(a scaled-down unit circle based on a sine/cosine signal pair) on an oscilloscope as
it bounces around in a box. In such cases a simple computer setup like this nor-
mally won’t suffice as the amplitude of the resulting sine/cosine signal pair will
not be constant over extended periods of time. This results from the unavoidable
imperfections of real computing elements. Depending on the computer used the
amplitude may decrease or increase over time, either collapsing to zero or causing
an overload condition of the integrators and the inverter.
Therefore, some additional circuitry is required for amplitude stabilization. The
idea behind this is fairly simple. On one hand, to avoid a decreasing amplitude,
some positive feedback must be introduced. This then makes a limiter circuit
necessary to avoid the ever-increasing amplitude that would otherwise inevitably
result. These two measures will, of course, cause some, but normally negligible,
distortion of the resulting waveform, especially when the sine/cosine signals are
used to drive some sort of display.
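This drift is easy to demonstrate numerically: adding a small parasitic damping term ε to each integrator (a crude stand-in for real-world imperfections; the value is an arbitrary choice) makes the amplitude decay exponentially. The following Python sketch shows the effect:

import math

eps = 0.01                      # small parasitic loss; a negative value makes the amplitude grow
x1, x2 = -1.0, 0.0
dt = 1e-4
for _ in range(int(20 * math.pi / dt)):     # about ten periods
    x1, x2 = x1 + dt * (x2 - eps * x1), x2 + dt * (-x1 - eps * x2)

print(math.hypot(x1, x2))       # amplitude has drifted from 1 to about 0.53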
Figure 4.11 shows a typical setup of a quadrature generator with amplitude
stabilization: At its heart is the circuit shown before, with the difference that
the central feedback loop uses integrator inputs weighted with 10, so that the
overall time scale factor is 10ωk0 . The potentiometer labeled α introduces some
positive feedback into the loop thus steadily increasing the amplitude of the output
signal. Depending on the analog computer used this positive feedback may be
unnecessary. If the amplitude decreases without it, it is sufficient to set α to a
very small value (less than 10⁻², typically). To avoid running into an overload
some negative feedback is necessary to keep the overall amplitude stabilized. This
is implemented by the potentiometer a∗ , which is one of the rare cases where a free
potentiometer is required. The output (wiper) of that potentiometer is connected
to a diode that is in turn connected to an integrator input weighted with 1.
The idea here is to apply a negative bias to the diode so that it conducts only
when the output of the leftmost integrator exceeds this bias voltage, essentially
implementing a negative feedback loop (with negative exponential characteristics).
It is important that the weight of the input connected to the diode is at least a
factor of 10 smaller than that of the main feedback loop.
This setup works well on analog computers with buffered coefficient poten-
tiometers, such as the Analog Paradigm Model-1 used here. If a classic analog
computer without buffered potentiometers is used, the cathode of the diode con-
nected to the potentiometer a∗ should be connected to the summing junction of
the leftmost integrator.46
There is some interplay between the settings of the potentiometers labeled a
and a∗ . As a determines the initial amplitude of the output signal, a∗ should be
set so that the diode bias voltage is of the same size as this initial amplitude. If
a > a∗ , the amplitude stabilization circuit will kick in directly after setting the
computer to OP-mode. This will result in a quickly decreasing amplitude which
is normally undesirable.
46 An even simpler implementation saves the potentiometer a∗ and connects the output of
the leftmost integrator to its summing junction by means of two Zener-diodes in anti-series.
These diodes will start conducting when the amplitude reaches their combined threshold level,
thus effectively limiting the output amplitude of the integrator.
Fig. 4.12. Output of the second integrator in the computer setup shown in figure 4.11 with k0 = 10³ and ω = 1/2, resulting in a time scale factor of 10k0ω = 5000
Figure 4.12 shows the output of this circuit with the analog computer set to
continuous operation with ω = 1/2 and k0 = 10³. The oscilloscope's x-deflection is
set to 2 ms per division as before.
A variation of this circuit, which is shown in figure 4.13, has been devised
by Edward L. Korte.⁴⁷ At first sight, this circuit looks quite similar to that
described above, but there are two subtle differences:
– The amplitude is stabilized by a negative linear feedback on the leftmost
integrator as controlled by the setting of the potentiometer labeled a∗ .
– It requires some positive feedback to keep the amplitude up. This is imple-
mented by the resistor R and two limiting diodes. The resistor should be of
the same order as the input resistors of the leftmost integrator. This results
in a small positive rectangular feedback signal which will keep the amplitude
from falling to zero.
4.3 Sweep
The resulting output signal is shown in figure 4.15. It should be noted that the
amplitude could use some stabilization, as shown before, since it visibly increases
with time.⁴⁸
Basically, the subcircuit in the upper half of figure 4.14 can be used to generate
sin(φ) and cos(φ) if φ̇ is given and fed to the two multipliers. This is actually a
very useful circuit since many problems require harmonic functions for a wide
range of the parameter φ and have φ̇ available elsewhere in the setup, as in the
following example.
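Before turning to that example, the idea can be checked numerically: integrating φ̇ cos(φ) and −φ̇ sin(φ) reconstructs sin(φ) and cos(φ). In the following Python sketch (purely illustrative) φ(t) = t²/2 is an arbitrary test trajectory, so φ̇ = t:

import math

s, c = 0.0, 1.0                 # running values of sin(phi) and cos(phi)
dt = 1e-5
t = 0.0
while t < 3.0:
    phidot = t                  # phi-dot, assumed available elsewhere in the setup
    s, c = s + dt * phidot * c, c - dt * phidot * s
    t += dt

print(s, math.sin(0.5 * t * t))     # the two values agree closely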
This section describes the simulation of the movement of the tip of a mathematical
pendulum as shown in figure 4.16. It consists of a weightless rod which is pivoted
at point 0 around which it can swing without friction. Mounted at the other end of
the rod is a punctiform mass m. The angle α0 denotes the maximum displacement
of the pendulum, i. e., its initial condition, while α represents the current angle
with respect to the vertical running through the pivot located at 0.49
The mass is subjected to the force Fg = mg which is caused by the acceleration
due to gravity. This can be resolved into two forces acting tangentially (Ft) and
radially (Fr) on the mass:
Ft = −mg sin(α)
Fr = mg cos(α)
48 The output signal should be taken from the integrator following the one with the amplitude
stabilization circuitry. This will yield a less distorted signal due to the roll-off effect of the
integration.
49 Since α changes with time, α(t) would be more accurate, but it would clutter the following
derivations. Furthermore, the time-dependency of this angle will show in the dot-notation used
for its derivatives.
Ft is the restoring force, which will drive the mass back to its position of rest. It
causes a tangential acceleration
$$a_t = \frac{F_t}{m} = l\ddot{\alpha}$$
with l denoting the length of the pendulum rod and α̈ being the angular acceler-
ation. This readily yields
mlα̈ = −mg sin(α)
and thus the following differential equation of second order describing the dynamic
behavior of the pendulum:
$$\ddot{\alpha} + \frac{g}{l}\sin(\alpha) = 0 \quad (4.6)$$
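A quick numerical experiment (Python, with illustrative values for g/l and the initial deflection) shows how much the sin(α) term matters for larger angles by comparing (4.6) with its linearization sin(α) ≈ α:

import math

g_over_l = 9.81                 # g/l for a pendulum of length 1 m
alpha = alpha_lin = math.pi / 4 # 45 degrees: clearly not a small angle
adot = adot_lin = 0.0
dt = 1e-5
for _ in range(int(2.0 / dt)):
    adot += dt * (-g_over_l * math.sin(alpha))
    alpha += dt * adot
    adot_lin += dt * (-g_over_l * alpha_lin)
    alpha_lin += dt * adot_lin

print(alpha, alpha_lin)         # the linearized pendulum runs visibly ahead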
If it wasn’t for the term sin(α) in this equation, it could be easily solved by an
analog computer using only summers, integrators, and coefficient potentiometers.
This sine term makes things a little bit more complicated. If a sine function gen-
erator is available, it can be used to mechanize this equation. Otherwise one has
to resort to one of the following two approaches:
4.4.2 Variants
50 What sufficiently small means actually depends on the precision required for a solution
of this problem. Typically, −π/8 ≤ α ≤ π/8 can be considered small enough as the error will
stay below 1%.
yielding the second term of (4.7). Changing the sign of this solution and adding φ
will then yield the required sine approximation, an approach that will be detailed
in section 5.4.
Apart from the amplitude stabilization scheme shown in section 4.2, the prob-
lems shown so far have exhibited oscillating behavior with no damping. The next
example introduces damping using the simple mass-spring-damper system shown
in figure 4.18. In this example y denotes the vertical position of the mass with
respect to its position of rest. Neglecting any gravitational acceleration acting on
the mass, there are three forces to be taken into account:
$$F_m + F_d + F_s = 0$$
Since this is a closed physical system, all forces add up to zero, yielding the
following second order differential equation, which describes the dynamic behavior
of this mass-spring-damper system:
$$m\ddot{y} + d\dot{y} + sy = 0$$
Here, m denotes the mass, d is the damper constant, and s the spring constant.
Without the damping force Fd this would just be an undamped harmonic oscilla-
tor, as in the previous examples.
51 The eigenfrequency, sometimes also called natural frequency, is the frequency at which a
system oscillates without any external forces acting on it.
$$\mu^2 + 2\beta\mu + \omega_0^2 = 0,$$
$$y = ae^{\mu_1 t} + be^{\mu_2 t} = ae^{-(\beta - i\omega)t} + be^{-(\beta + i\omega)t} = ae^{-\beta t}e^{i\omega t} + be^{-\beta t}e^{-i\omega t} = e^{-\beta t}\left(ae^{i\omega t} + be^{-i\omega t}\right). \quad (4.12)$$
The $e^{-\beta t}$ term is the damping term of this oscillating system while a + b and
a − b are determined by the initial conditions. If it is assumed that the mass
has been deflected by an angle α0 at t = 0, this will obviously be the maximum
amplitude of its movement. If the mass is then just released at t = 0 without any
initial velocity given to it, the following initial conditions hold:
α(0) = α0 and
α̇(0) = 0.
α(0) = (a + b) = α0 .
Differentiating (4.13) with respect to t and applying the same arguments yields
α̇(0) = (a − b)iω = 0
and thus a − b = 0 for this case. So a mass released from a deflected position at
t = 0 with no initial velocity is described by
$$y = e^{-\beta t}\,\alpha_0 \cos(\omega t),$$
which is exactly what would have been expected from a practical point of view.
The position of the mass follows a simple harmonic function and its amplitude is
damped by an exponential term with negative exponent.
The term
$$\omega = \sqrt{\omega_0^2 - \beta^2},$$
describing the angular eigenfrequency, yields
$$T = \frac{2\pi}{\omega}$$
for the period and is quite interesting as three cases have to be distinguished with
respect to ω0 and β:
ω0 > β: Subcritical damping (underdamped) – the mass oscillates.
ω0 = β: Critical damping – the system returns to its position of rest in an expo-
nential decay movement without any overshoot.
ω0 < β: In this case the system is said to be overdamped. It will return to its
position of rest without any overshoot but more slowly than with critical
damping.
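These three regimes are easily reproduced numerically. The following Python sketch (parameter values are illustrative) releases a deflected mass at rest and integrates mÿ + dẏ + sy = 0 for three damper settings; with m = s = 1 and β = d/2, the damping is below, at, and above ω0 = 1:

def simulate(m, d, s, t_end=10.0, dt=1e-4):
    y, ydot = 1.0, 0.0          # deflected mass released at rest
    for _ in range(int(t_end / dt)):
        ydot += dt * (-(d * ydot + s * y) / m)
        y += dt * ydot
    return y

for d in (0.2, 2.0, 6.0):       # under-, critically, and overdamped
    print(d, simulate(1.0, d, 1.0))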
Fig. 4.20. s = .2, d = .8
Fig. 4.21. s = .6, d = .8
Fig. 4.22. s = .8, d = .6
Fig. 4.23. s = .8, d = 1
Equation (4.8) is used as the basis to derive a computer setup for this mass-spring-
damper system. Rearranging yields
$$\ddot{y} = -\frac{d\dot{y} + sy}{m},$$
which can be readily transformed into the computer setup shown in figure 4.19.
Instead of dividing by m it is much simpler to multiply by 1/m, although this
requires a little calculation before setting the corresponding potentiometer.
Using the two potentiometers labeled −ẏ(0) and y(0) the initial conditions
of the mass-spring-damper system can be set. y(0) controls the initial deflection
of the mass while −ẏ(0) sets its initial velocity at the start of a simulation run.
Figures 4.20 to 4.23 show the qualitative results of four simulation runs with the
mass set to 1 (i. e., omitting the potentiometer following the rightmost summer
altogether) and different settings for the spring and damper constants s and d. As
can be clearly seen a stiffer spring requires a stronger damper in order to bring the
oscillation down quickly. Furthermore, a stiffer spring results in a higher resonance
frequency of the oscillating system.
The setup shown in figure 4.19 has the advantage that all three parameters
can be set independently from each other. This comes at a cost, though, as two
summers, one acting as an inverter, are required. Taking into account that inte-
grators typically feature multiple inputs and that integrators and summers both
perform an implicit sign inversion, the simplified computer setup shown in figure
4.24 saves one summer.
This, however, has a drawback. The parameters s and d can no longer be set
independently from m as
$$\nu = \frac{s}{m} \quad \text{and} \quad \mu = \frac{d}{m}.$$
This does match nicely with (4.9) above.
4.5.3 RLC-circuit
The following example is based on the computer setup shown above in figure
4.24 with −ẏ(0) = 0. The following component values will be assumed:
C = 1.2 µF
L = 5.1 mH
R = 20 Ω
E = 6.94 V
with E representing the voltage delivered by the battery in the circuit. A fully
charged capacitor at t = 0 yields the initial condition
$$Q(0) = CE = 8.328 \cdot 10^{-6}\ \text{As},$$
which is also the maximum amplitude of the charge variable that can be exhibited
by the oscillating system.
This readily yields the following amplitude scaling factor
$$\alpha = \frac{1}{8.328 \cdot 10^{-6}\ \text{As}} \approx 120077\ (\text{As})^{-1}.$$
Sacrificing some of the machine unit range of [−1, 1], a round scale factor of
α = 10⁵ can be used instead, yielding the scaled equation
$$\ddot{\hat{Q}} = -3.92 \cdot 10^{3}\,\dot{\hat{Q}} - 1.63 \cdot 10^{8}\,\hat{Q}$$
with the initial condition
$$\hat{Q}(0) = 0.833.$$
54 As before, the hats denote (scaled) machine variables. Units will be left out from now on
to simplify notation.
With these scaled amplitudes the remaining step is time scaling. The (un-
damped) angular frequency55 of the basic RLC-circuit shown in figure 4.25 is
$$\omega = \frac{1}{\sqrt{LC}} \approx 12783\ \text{s}^{-1}$$
resulting in
ν ≈ 2 kHz
since ν = ω/2π. A time scaling factor of β = 10⁴ is chosen, which finally yields
$$\frac{d^2\hat{Q}}{d\tau^2} = -0.392\,\frac{d\hat{Q}}{d\tau} - 1.63\,\hat{Q}.$$
The coefficients 0.392 and 1.63 can now be readily set as µ and ν in the computer
setup shown in figure 4.24 with 1.63 represented by ν = 0.163 feeding an integrator
input weighted by 10.
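All of the scaling arithmetic of this example can be cross-checked with a few lines of Python (component values as given in the text):

import math

C, L, R, E = 1.2e-6, 5.1e-3, 20.0, 6.94
Q0 = C * E                      # initial charge, ~8.33e-6 As
omega = 1 / math.sqrt(L * C)    # ~12783 1/s
print(Q0, omega, omega / (2 * math.pi))     # frequency ~2 kHz

beta = 1e4                      # chosen time scale factor
print(R / L / beta)             # ~0.392, the coefficient mu
print(1 / (L * C) / beta**2)    # ~1.63, set as nu = 0.163 on a weight-10 input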
This example concludes the introductory section on analog computer program-
ming. The following chapter presents a number of useful special-purpose circuits,
which also serve as additional programming examples.
55 Since the undamped angular frequency is always larger than the damped frequency it is a
good upper limit of what is to be expected from the circuit.
5 Special functions
As in classic digital computer programming, there are special functions that fre-
quently occur in analog computer setups, such as inverse functions (like (square)
roots), limits, hysteresis, etc. This chapter describes typical “library” functions
which can be used as building blocks for more complex programs. As in the al-
gorithmic domain, especially among Perl programmers, the saying “there is more
than one way to do it” holds true for analog computer programs. The functions
described here are typically neither the only possible implementations nor neces-
sarily the best.
In case of doubt or when searching for other functions, the vast classic
literature should be consulted. The definitive books [Korn et al. 1964] and
[Giloi et al. 1963] (German) as well as [Hausner 1971], [Gilliland 1967],
[Ammon 1966], and [Carlson et al. 1967] are highly recommended for further
reading.
Integration on an analog computer only works with time as the variable of inte-
gration. In some cases it is necessary to compute a more general integral of the
form
$$\int_{x_0}^{x_1} f(x)\,dx,$$
$$\int_{0}^{T} f(t)\,\frac{dx}{dt}\,dt$$
with x = x(t), x(0) = x0, and x(T) = x1.⁵⁶ This can be easily implemented by
an integrator and a multiplier as shown in figure 5.1. It should be noted that the
multiplier should have a minimum zero point error since the integration amplifies
any such error linearly over time, depending on the time scale factor k0.
A very powerful technique for function generation is that of inverse functions, for
example obtaining a square root based on the function f (x) = x2 . Recalling the
basic setup of summers and integrators, the idea of a setup yielding an inverse
function is simple and relies on the fact that the operational amplifier tries to keep
its inverting input, SJ, at zero. Figure 5.2 shows the basic setup to generate an
inverse function.
At the heart of this circuit is a function generator that forms the feedback
element of an open amplifier. With an input signal of x the summing junction will
be at zero if −x = f(−y) with y = f⁻¹(x) being the value at the output of the
amplifier.
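The principle can be mimicked numerically: a high-gain feedback loop drives the output until the residual at the summing junction vanishes. The following Python sketch (a toy model with simplified signs, ignoring the inversions of the actual circuit; gain and step values are arbitrary) computes a square root this way:

def inverse(f, x, K=50.0, dt=1e-3, steps=20000):
    y = 0.1                     # start away from zero
    for _ in range(steps):
        y += dt * K * (x - f(y))    # output moves until x - f(y) is nulled
    return y

print(inverse(lambda y: y * y, 0.25))   # ~0.5, i.e. the square root of 0.25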
If the inverse function is not defined over the full range of [−1, 1], some precau-
tions are necessary to prevent the amplifier from overloading. It is also sometimes
necessary to connect a small capacitor, typically 10s of pF, between the output of
Fig. 5.1. Implementation of a Stieltjes integral
Fig. 5.2. Basic setup for generating an inverse function
Fig. 5.3. Typical square root circuit
Fig. 5.4. Function generated by the computer setup shown in figure 5.3 with −1 ≤ x ≤ 1
the operational amplifier and its summing junction if the basic setup shows signs
of instability.57
A good example of an inverse function is the square root circuit shown in figure
5.3. This is basically the same as the setup shown in figure 5.2 with two additional
parts. A diode and a small capacitor are connected between the output of the open
amplifier and its summing junction. The diode prevents saturation of the amplifier
when the input becomes positive as in this case the input to the amplifier and the
output of the multiplier will both be positive, thus driving the amplifier’s output
to its maximum negative level.
Although this does not harm the amplifier, driving it into saturation will gen-
erate an overflow in addition to yielding a nonsensical output value. Furthermore,
the amplifier may take some time to recover from this condition. This is especially
true for classic analog computers, which can exhibit recovery times of up to several
seconds. Note that the diode will keep the amplifier’s output a tad below zero due
to its forward threshold voltage – which is about 0.7 V for a standard silicon diode
and as low as 0.2 V for a Schottky diode.
The small capacitor prevents ringing when the amplifier’s output is near zero.
In some cases it could be omitted, depending on the behavior of the open amplifier
used. Figure 5.4 shows the behavior of this circuit. The input value has been
generated by a single integrator yielding a ramp running from −1 to +1. This
ramp has also been used for the x-deflection of the oscilloscope.
57 The capacitor provides a high frequency roll-off. Another approach to generating inverse
functions, which can also be applied to division, will be shown in section 5.16.
5.2.2 Division
The square root circuit shown above can be easily modified to implement a division
operation. Figure 5.5 shows the basic setup: With the multiplier connected to the
output of the amplifier and to the input variable y, the open amplifier will drive
its output towards −x/y, so that the output of the multiplier yields −x which
cancels the current delivered through the x-input and its associated input resistor
to the inverting input of the amplifier.
As noted before, a small capacitor of about 68 pF should be connected between
the output of the amplifier and its summing junction to prevent unstable behavior
of the circuit. It should be also noted that this circuit does not perform a four-
quadrant division operation.
Some systems feature pre-wired modules to compute common inverse func-
tions such as square roots and divisions. Figure 5.7 shows an Analog Paradigm
MDS2 module containing two computing elements that can be switch selected to
perform either multiplication, division, or square rooting.
5.3 f(t) = 1/t

$$\dot{x} = -x^2 \quad (5.1)$$
Powers of τ and/or polynomials built from such powers are often required within
a computer setup. At first glance such powers can, of course, be generated by
means of multipliers fed with a linear ramp signal such as −1 ≤ τ ≤ 1, but this is
not always realistic as multipliers are expensive and analog computers, especially
historic ones, typically have only a few of them. Furthermore, many multipliers such
as quarter square multipliers show non-negligible errors when their input values
approach zero.
Consequently, it is often more advisable to generate powers of a linearly time
varying variable τ by means of successive integration as shown in figure 5.10. This
setup yields the powers τ, −τ², and τ³ for −1 ≤ τ ≤ 1.⁵⁸
This circuit can obviously be extended for higher powers of τ and some of
the coefficient potentiometers can readily be replaced by joining inputs of the
following integrator, i. e., a factor 0.2 in front of an input weighted with 10 can
be replaced by feeding the value into two connected inputs, each with weight 1,
etc. The successive powers of τ generated by such a setup can also be added
together, weighted by coefficients (with optional sign changes by means of summers),
to form arbitrary polynomials. This is a useful technique to generate functions which
58 A similar setup has already been shown in figure 2.13 in section 2.3.
Fig. 5.10. Generating successive powers of τ
Fig. 5.11. Successive powers of τ generated by successive integrations
can be represented by a Taylor polynomial or the like. Figure 5.11 shows the
time-dependent functions generated by the setup shown above.
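Numerically, the same chain of integrators looks as follows (Python; the sign inversions and input weights of the actual patching are folded into plain mathematics here):

tau, tau2, tau3 = -1.0, 1.0, -1.0   # values of tau, tau^2, tau^3 at tau = -1
dt = 1e-5
for _ in range(int(2.0 / dt)):      # sweep tau from -1 to +1
    tau3 += dt * 3 * tau2           # d(tau^3)/dt = 3 tau^2
    tau2 += dt * 2 * tau            # d(tau^2)/dt = 2 tau
    tau += dt                       # d(tau)/dt = 1

print(tau, tau2, tau3)              # all approximately 1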
Sometimes, for instance when processing data gathered from experiments, a low
pass filter is useful. From a rather simplified point of view such a device lets only
frequencies below a certain threshold pass.59
Another application for such a circuit is the filtering of a (white) noise signal
obtained from a noise generator. Such signals, filtered and possibly shaped into a
suitable distribution (e. g., Gaussian), are extremely useful when the behavior of
mechanical systems exposed to random excitation is to be analyzed.
59 A more detailed analysis would take the shape of the filter’s passband as well as the
induced phase-shift, etc., into account.
Fig. 5.12. Basic circuit of a low pass filter
Fig. 5.13. Typical behavior of two low pass filters in series – note the mirror image relationships between successive traces due to a sign change upon integration
Although some classic analog computers such as the EAI-231(RV) or the Tele-
funken RA 770 feature noise generators yielding a normally-distributed output
signal, such devices are rare nowadays. Noise generators intended for audio pur-
poses feeding a low pass filter may be used instead; this is described below. If a
normal distribution of the output signal is required, it can be readily obtained by
using a suitable function generator.
Implementing a low pass filter on an analog computer is extremely simple as
it just resembles a 1st -order time lag with the general transfer function
$$F(p) = \frac{1}{1 + pT},$$
where T represents the time constant of the delay line.60 This setup, shown in
figure 5.12, basically yields a time varying mean value of its input signal. This
mean gets “better” as the time constant of the circuit increases – but this also
implies that the circuit needs a longer time span to settle on the mean value.
Typically, both coefficient potentiometers are set to identical values although
sometimes a certain deviation from the “ideal” setting may be advantageous de-
pending on the desired output signal.
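In digital form this time lag is just a leaky integrator. The following Python sketch (time constant and noise source are illustrative) smooths a noisy signal in the same way as the circuit of figure 5.12:

import random

T = 0.05                        # filter time constant
y = 0.0
dt = 1e-3
for _ in range(2000):
    x = 1.0 + random.uniform(-0.5, 0.5)     # noisy input around 1.0
    y += dt * (x - y) / T                   # dy/dt = (x - y) / T
print(y)                        # close to the mean value 1.0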
Figure 5.13 shows a white noise signal ranging from 0 Hz to 100 kHz obtained
from a professional noise generator in the upper third of the screen. The curve
in the middle is the result of applying one low pass filter of the structure shown
above, while the lower trace represents the output signal of a second low pass
filter connected to the output of the first filter.
60 Cf. [Giloi et al. 1963, p. 308] and appendix A for more details. p is an operator imple-
menting a time derivative.
Diodes are very useful components as they let current pass only in one direction –
at least that is what an ideal diode should do. Real diodes have some nasty traits.
Most importantly they exhibit some forward threshold voltage, i. e., they won’t
start conducting exactly when their input crosses zero but only when it exceeds
the small voltage that is an intrinsic characteristic of every diode. These threshold
Fig. 5.16. Triangle and square wave signal generated by the circuit shown in figure 5.15
Fig. 5.17. Idealized diode circuit
voltages are typically 0.2 V for Schottky diodes and can go up to 0.7 V for a
classic silicon diode.⁶³
Using an open amplifier a real diode can be readily transformed into an almost
ideal diode using the setup shown in figure 5.17. The amplifier has two feedback
paths. One connects its output via a diode to its summing junction, thus limiting
its output to (near) zero when this diode starts conducting. The second path runs
through a second diode to one of the inputs of the summer, causing it to behave
like a classic inverter when this diode starts conducting. Accordingly, the output
of the amplifier will be higher than the output of the overall circuit by the amount
of threshold voltage of the diode used.
Figure 5.18 shows the behavior of this circuit with the diodes oriented as
shown. If both diodes are reversed, the circuit will behave as shown in figure 5.19.
In some cases it may be necessary to connect a small capacitor (about 68 pF)
between the summer’s output and its summing junction to avoid instabilities.
Obviously, a comparator with an associated (electronic) switch can also be
used to implement an ideal-diode look-alike. In this case, one input of the switch
63 Schottky diodes typically exhibit a larger reverse leakage current than classic silicon
diodes and should be avoided in this application.
Fig. 5.18. Behavior of the circuit shown in figure 5.17
Fig. 5.19. Behavior of the circuit shown in figure 5.17 with both diodes reversed
Fig. 5.20. Absolute value function with ideal diode circuit
Fig. 5.21. Absolute value
is left open or is grounded while the other is connected to the input signal, which
is also connected to the comparator input. Since all comparators exhibit some
hysteresis to ensure stable operation, this setup may result in non-negligible errors
of the diode-approximation. This setup should be avoided on a classic machine
which uses electro-mechanical relays in conjunction with its comparators because
relays exhibit significant variations in their closing time as well as contact bounce.
Fig. 5.23. Simple limiter circuit
Fig. 5.24. Precision limiter circuit
5.9 Limiters
A limiter function limits a signal to an upper and lower bound. This is particularly
useful for the simulation of mechanical systems with stops or systems exhibiting
saturation effects. A very simple limiter circuit is shown in figure 5.23. At its heart
is a summer which acts as an inverter as long as neither of the two forward-biased
diodes conducts. This is the linear region of the limiter – its output follows the
input (with the typical sign-reversal). If its output reaches the threshold of either
of the two diodes shown, these will short-circuit the built-in feedback resistor of
the summer and thus limits its output to the forward-bias voltage set on the two
free potentiometers.
This circuit has two disadvantages. First, since it requires two free potentiome-
ters without output buffers, it is best suited for classic analog computers which
use simple voltage dividers as coefficient potentiometers. Second, the limits are
not really “flat” – they exhibit some slope that may be detrimental to the overall
setup using this limiting function.
A much better limiter circuit is shown in figure 5.24. At its heart is a diode
bridge controlled by two voltages representing the upper and lower limits. The two
resistors should be about a tenth of the value of the input resistor of the inverter
following the bridge circuit. This circuit exhibits excellent flatness of its output
signal in the limited state, as shown in figure 5.25, and is highly recommended.
A very simple, yet often practical limiter circuit just employs two suitable
Zener-diodes with their cathodes (or anodes) connected between the output of
a summer or integrator and its respective summing junction.
Many mechanical systems, such as gear trains, linkages, etc., exhibit a phe-
nomenon called backlash, which is caused by unavoidable gaps between moving
parts. A dead zone circuit is used to model this behavior on an analog computer.
The literature contains a plethora of such circuits – the basic idea is to use
a pair of suitably forward-biased diodes feeding a summer. The simplest form of
such a circuit consists of just two diodes, two free potentiometers and a summer.
This simplicity comes at a cost because the setup of the potentiometers is not
straightforward and the diodes are far from being ideal.
The circuit shown in figure 5.26 replaces these two real diodes by idealized
diodes as shown in section 5.7. The parameters r and l define the right and left
break-point of the dead zone. It may be necessary to parallel the two diodes
connected to the summing junctions of their associated open amplifiers with small
capacitors to avoid unstable behavior of the circuit.
Figure 5.27 shows the behavior of this dead zone setup. A linear voltage ramp
was used for input. The left and right gradient of the curve are determined by the
weights wr and wl of the output summer’s inputs.
Here, too, Zener-diodes may be employed, if a very simple dead zone circuit is
sufficient. In this case two of these diodes with their cathodes or anodes connected
are just placed before a suitable input of a summer or integrator.
Fig. 5.26. Precision dead zone circuit
Fig. 5.27. Behavior of the dead zone circuit shown in figure 5.26
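The transfer characteristic realized by this circuit is easily written down in software terms. The following Python sketch (break points and slopes are illustrative; the sign inversion of the output summer is omitted) mirrors the behavior shown in figure 5.27:

def dead_zone(x, l=-0.3, r=0.3, wl=1.0, wr=1.0):
    if x > r:
        return wr * (x - r)     # right branch with slope wr
    if x < l:
        return wl * (x - l)     # left branch with slope wl
    return 0.0                  # dead zone between the break points

print(dead_zone(0.1), dead_zone(0.5), dead_zone(-0.5))  # 0.0, 0.2, -0.2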
5.11 Hysteresis
A system exhibiting a hysteresis has some kind of “memory”, i. e., its output does
not only depend on its current input but also on its previous state(s). Many nat-
ural systems, such as some magnetic materials, exhibit a hysteresis effect, which
is also deliberately built into devices such as simple thermostats, etc. As with
dead zone functions, the literature contains a wealth of circuits implementing
various forms of hysteresis; see for example [Howe 1961, p. 1482 et seq.],
[Giloi et al. 1963, p. 210], or [Hausner 1971, p. 142/145 et seq.].
The circuit shown in figure 5.29 implements rectangular hysteresis. It is simple
but requires an electronic switch. The inherent hysteresis of the comparator is
typically negligible but should at least be considered. The parameters a and b
define the upper and lower limits of the output while α shifts the hysteresis “box”
around the origin. As simple as this circuit is, it has a flaw: the summer can (and
will) go into overload. Accordingly, it is advisable to limit its output by a pair of
Zener-diodes connected between its output and its summing junction to suppress
an overload condition which might otherwise halt the computer operation, depending on the
run mode.
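The essential point – that the output depends on the switching history, not just the momentary input – is captured by the following stateful Python sketch (threshold and output values are illustrative):

class Hysteresis:
    def __init__(self, low=-0.25, high=0.25, a=1.0, b=-1.0):
        self.low, self.high, self.a, self.b = low, high, a, b
        self.out = b            # remembered switching state
    def step(self, x):
        if x > self.high:
            self.out = self.a
        elif x < self.low:
            self.out = self.b
        return self.out

h = Hysteresis()
print([h.step(x) for x in (0.0, 0.3, 0.0, -0.3, 0.0)])
# [-1.0, 1.0, 1.0, -1.0, -1.0]: the same input 0.0 yields different outputs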
Some problems require functions max(x, y) or min(x, y) which can be easily im-
plemented using a comparator with electronic switch and an inverter as shown in
figure 5.28.
Fig. 5.29. Simple hysteresis with comparator and electronic switch
Fig. 5.30. Bang-bang circuit
5.13 Bang-bang
The bang-bang circuit gives one of two values at its output depending on its input
voltage. This can be readily implemented by means of a comparator with an asso-
ciated (electronic) switch as shown in figure 5.30. The inputs to the comparator’s
switch are connected to potentiometers which define the upper and lower limit
of the circuit’s output. One input of the comparator is connected to the input
signal while the other one is fed with a threshold value γ. If this second input is
grounded or omitted, the circuit will switch when the input x crosses zero. In this
case the circuit effectively implements the sign(x) function.
Figure 5.31 shows the dynamic behavior of this simple bang-bang circuit. It is
fed with a triangle wave generated by the circuit shown in section 5.6. The value
γ is set to 0.75 and defines the duty cycle of the rectangular output signal, so to
speak. If γ is a variable, this setup can be used as a simple modulation circuit.
Fig. 5.31. Behavior of the bang-bang circuit
Fig. 5.32. Simple minimum holding circuit
During the study of stochastic systems, but not restricted to this, it is often useful
to have a minimum/maximum holding circuit to store extreme values of a signal
for later analysis or for use in a subsequent computer run. A plethora of circuits to
accomplish this can be found in the literature, see [Ammon 1966, p. 114 et seq.] or
[Hausner 1971, p. 146], etc. These circuits often make use of idealized diodes as
described in section 5.7 requiring an open amplifier with the associated stability
problems.
If the computer being used has comparators with (fast) electronic switches,
very simple minimum/maximum holding circuits can be devised. Figure 5.32 shows
a typical minimum holding circuit – the electronic switch connects the input of
the integrator to the output of the inverting summer whenever the value at the
integrators output is larger than the value at the summer’s output. By interchang-
ing the two inputs of the electronic switch, the minimum holding circuit can be
transformed into a maximum holding device.
Generally, it is advisable to have the integrator set to a time scale factor
as large as possible, so that it will take on new values as quickly as possible.
Depending on the behavior of the integrator it may show quite significant drift with
a setting like k0 = 10³ and using an input weighted by 10. Most electronic switches
will also exhibit some small error increasing the drift in this setup. Connecting
the free input of the switch to ground as shown can lessen this error.
Figure 5.33 shows a typical output yielded by this setup. The upper trace is a
random input signal, the middle trace shows the output of the minimum holding
circuit while the lower trace shows the signal controlling the electronic switch.
Nevertheless, if computations take place at high-speed, the tiny but unavoid-
able hysteresis of any comparator circuit will render this simple circuit pretty
useless as the integrators will tend to overshoot. A much better but more complex
circuit is shown in figure 5.34. At its heart is an idealized diode.64 It is often a
64 Changing the direction of both diodes will change this circuit from a peak detector to a
floor detector, so to speak.
Fig. 5.33. Behavior of the simple minimum holding circuit shown in figure 5.32
Fig. 5.34. Improved peak detector
Fig. 5.35. Simple sample and hold circuit
good idea to keep k0 as low as possible, with k0 = 10² being a typical value, and
use inputs (or a parallel circuit of several inputs) with as large an input weight as
possible. Increasing k0 instead may yield unstable behavior.
A sample and hold circuit is used to sample an input signal and hold this value
until another sample is requested. A simple circuit for such a device is shown in
figure 5.35.65 It relies on an (electronic) switch that is triggered by an external
signal (in this case applied to its associated comparator) feeding an integrator. The
input of this switch is driven by an error signal which results from the difference
between the current input signal and the value stored in the integrator. When the
switch is closed the integrator will drive this error signal to zero, thus yielding the
last input value at its output.
It is desirable to have the integrator set to a very large time scale factor such
as k0 = 10³ or larger using inputs weighted by 10 or even 100 in order to follow
Mode      OP   OP   IC   HALT
ModeIC    −1   +1   −1   +1
ModeOP    −1   −1   +1   +1

Fig. 5.36. Sample and hold circuit with an integrator individually controlled according to the table
the input signal with as little time lag as possible. The electronic switch may be
substituted by a multiplier as long as this element has a negligible balance error.
If the analog computer being used allows individual mode control of integra-
tors, a sample and hold circuit can be setup even more easily, as shown in figure
5.36. Many classic analog computers have a digital expansion system where the
mode control lines of at least some of the computer’s integrators are available.
The Analog Paradigm INT4 module shown previously has four integrators, two of
which can be externally controlled by means of jacks labeled ModeIC and Mode-
OP. The table included in the figure shows the four control signal combinations
that can be applied to an integrator regardless of the current mode of operation
selected on the computer’s control unit.
To use an integrator as sample and hold circuit the ModeOP input should be
tied to +1. Setting ModeIC to −1 will then track the input signal while applying
+1 to this input will hold the last value. Typically this works well because the
initial condition circuitry features an input weight of at least 100, resulting in an
overall time scale factor of 10⁵ with k0 = 10³.
In some cases where the tracking behavior of the integrator while being in IC-
mode is undesirable, two integrators can be connected in series with the output
of the first one feeding the IC input of the second. Using clever external mode
controls this forms a simple bucket brigade circuit which yields a step-wise signal
at its output.
In rare cases it can be necessary to compute the time derivative of a variable. How-
ever, this operation should generally be avoided at all costs because – in contrast
to integration – it increases the signal noise considerably.66 Nevertheless, when
66 That is one of the main reasons why analog computers feature integrators instead of
differentiators although solving differential equations is possible either way.
67 Cf. [Giloi et al. 1963, p. 160 et seq.] for details on differentiation using analog computers.
68 This circuit is quite similar to the DC block circuit described in section 5.5.
Fig. 5.38. Time derivatives of a triangle signal generated by the setup shown in figure 5.15 with
upper frequency limit set correctly (second from top), too low (second from bottom), and too high
(bottom)
This circuit can be used more generally to replace an open amplifier when
creating an inverse function such as square rooting, division, etc. A setup for
division using this approach is described in [Giloi et al. 1963, p. 168].
Although delays belonging to the third class would seem to be the most desirable
in an analog computer setup their implementation is quite demanding. In addition
to that, these approaches exhibit a number of deficiencies (which will be discussed
below) that have to be taken into account in an actual simulation setup. Delays
of the first or second class are often preferred, despite their shortcomings.
69 More details on this can be found in [Howe 1961, p. 225 et seq., p. 261].
Fig. 5.40. Time delay circuit according to [Thomas et al. 1969, p. 638]
representation but operating in discrete time. Its modern equivalent is the bucket
brigade device, described in section 5.17.3.
The following sections will discuss the three basic classes of delay circuits and
provide some practical implementation examples which are suitable for modern
high-speed analog computers.
5.17.2 Digitization
Fig. 5.41. Classic EAI time delay unit
Fig. 5.42. Clock control of a sample and hold circuit
The main disadvantage of this approach is that time as well as the signal
values are necessarily quantized. The time discretization is determined mainly by
the speed of the ADCs/DACs, the amount of available memory, and the maxi-
mum required delay time T . The signal value discretization is determined by the
resolution of the ADCs/DACs. Although such circuits are readily available with
16 bits of resolution, careful layout of the printed circuit board is required. If
these discretizations are acceptable, an Arduino®-based digital delay circuit can
be considered.
A classic example of such a digital delay unit dating back to the late 1970s
is shown in figure 5.41. If such a device is readily available, it should be given
precedence over all other implementation variants because these typically require
a lot of computing elements and will also exhibit signal distortion that in some
cases may not be acceptable.
The sample and hold circuits shown in figure 5.43 are representative of the second
class of delay systems. Such delay networks consist of an even number of individ-
ually controlled integrators (denoted by the small black stripe) grouped in two
groups, labeled I and II. These two integrator groups are then alternately toggled
between IC and HALT mode by means of a two-phase clock signal as shown in
figure 5.42.
Each group alternately cycles through the IC- and HALT-modes, thus moving
the value at the input of this integrator chain in a step-wise fashion from left to
right with the delay time T depending on the clock frequency and the number of
double integrator stages. Circuits like these have been used extensively to imple-
Fig. 5.44. Basic bucket brigade circuit as implemented in the MN 3007 integrated circuit (cf.
[Panasonic 3007, p. 35])
ment sampled data systems and difference equations,70 but they are not directly
suitable to implement a highly granular delay circuit due to the staggering number
of integrators that would be required.
A more suitable implementation of such an integrator chain is called a bucket
brigade device, BBD for short. This name is quite descriptive as they are imple-
mented by a number of capacitors connected as shown in figure 5.44.71 These
capacitors are interconnected by field effect transistors (FET s), which are con-
trolled by a two-phase clock signal, CP1 and CP2, controlling the odd and even
numbered FETs respectively. A voltage applied to the input IN is stored in the
first capacitor of the bucket brigade by a clock pulse on CP2. This pulse is fol-
lowed by a pulse on CP1 which transfers this value from the first capacitor to the
second and so on. Since the number of capacitor stages is fixed72 the desired delay
is achieved by adjusting the frequency of the two-phase clock signal.73
The complementary output stage shown in figure 5.44 is noteworthy. Although
one of the two output connections would suffice, the output signal is typically
derived from this device by adding both output currents from OUT1 and OUT2
before feeding the combined signal to an output filter to suppress any spurious
high-frequency components introduced by the two-phase switching processes in
the bucket brigade.
The delay obtained by a certain clock frequency is given by
$$t_{delay} = \frac{N}{2 f_{clock}},$$
where N denotes the number of bucket brigade stages and is typically of the form
N = 2ⁿ with n ∈ ℕ, 8 ≤ n ≤ 11. The clock frequency is limited by fmin ≤
fclock ≤ fmax. In the case of the MN3007 integrated circuit fmin = 10 kHz and
fmax = 100 kHz with N = 1024. The upper clock frequency limit is mainly due
to the high capacitance of the clock inputs CP1 and CP2 whilst the lower limit is
due to the inevitable leakage currents of the FET-controlled capacitors.⁷⁴
With these constraints an MN3007-based delay circuit is nominally capable of
delay times 5.12 ms ≤ tdelay ≤ 51.2 ms.⁷⁵
The resulting term e−sT is called the Laplace delay operator. Practical de-
lay circuits can now be implemented by interpreting this operator as a transfer
system77 which is described by its transfer function. Generally, given a time-
dependent input/output signal pair yi (t) and yo (t) the behavior of the transfer
system can be described by
$$F(s) = \frac{y_o(s)}{y_i(s)} \quad (5.3)$$
with
$$y_o(s) = \mathcal{L}(y_o(t)) \quad \text{and} \quad y_i(s) = \mathcal{L}(y_i(t))$$
74 If some degree of signal degradation is acceptable, fmin = 1 kHz may be acceptable, too.
75 With fmin = 1 kHz the maximum delay time can be as long as 512 ms.
76 Cf. [Carslaw et al. 1941, p. 7].
77 The following sections are largely based on [Giloi et al. 1963, p. 288 et seq.].
being the Laplace transforms of the time varying input and output signals. Such
transfer systems can be grouped into two classes:
Only systems of the first class will be considered here. These can be described by
a rational function of the form
$$F(s) = \frac{\sum\limits_{k=0}^{m} b_k s^k}{\sum\limits_{k=0}^{n} a_k s^k}. \quad (5.4)$$
which can now be used to derive an analog computer program for a given transfer
system like the delay function.
Unfortunately, the naïve approach
$$e^{-sT} = \sum_{k=0}^{\infty} \frac{(-1)^k (sT)^k}{k!}$$
is not particularly well suited for an analog computer program since it requires
an exorbitant amount of circuitry, even for a rough approximation. Ameling78
gives some interesting additional series expansions which, unfortunately, are also
not very useful for a practical implementation.
An interesting approach is to abandon the direct implementation of a power
series representation of the delay operator and to use its macroscopic behavior in-
stead to derive a suitable approximation. With s = iω the Laplace delay operator
satisfies
$$|e^{-i\omega T}| = 1,$$
The Laplace delay operator may now be approximated quite simply by the
following first-order Padé approximation:79
$$P_1(s) = \frac{y_o(s)}{y_i(s)} = \frac{1 - \frac{sT}{2}}{1 + \frac{sT}{2}} = \frac{2 - sT}{2 + sT}$$
Cross multiplying the two fractions yields
$$y_o(s)(2 + sT) = y_i(s)(2 - sT).$$
Fig. 5.45. 1st-order Padé approximation for the delay operator
Fig. 5.46. Simplified 1st-order Padé approximation
from which
$$\frac{T}{2}(y_o + y_i) = \int_0^t (y_i - y_o)\,dt$$
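Differentiating this relation gives ẏo = (2/T)(yi − yo) − ẏi, which is easy to integrate numerically. The following Python sketch (input signal and delay time are illustrative) compares the Padé output with the exactly delayed signal:

import math

T = 0.1                         # delay time to be approximated
yo, yi_prev = 0.0, 0.0
dt = 1e-4
t = 0.0
while t < 1.0:
    t += dt
    yi = math.sin(2 * math.pi * t)
    yidot = (yi - yi_prev) / dt
    yo += dt * ((2 / T) * (yi - yo) - yidot)
    yi_prev = yi

print(yo, math.sin(2 * math.pi * (t - T)))  # reasonably close after the transient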
$$y_i(s)\left(1 - \frac{sT}{2} + \frac{s^2T^2}{12}\right) = y_o(s)\left(1 + \frac{sT}{2} + \frac{s^2T^2}{12}\right),$$
Fig. 5.47. Response of the 1st-order Padé approximation to three different input signals, sin(ωt), 1 − e^{−λt}, and a step function (from left to right, actual measurements)
84 See [Giloi et al. 1963, p. 297], [Carlson et al. 1967, p. 225 et seq.], or [Cunningham 1954]
for a collection of suitable analog computer setups.
Fig. 5.48. 2nd-order Padé approximation for the delay operator – typical values for α and β are 20 and 100, respectively
Fig. 5.49. Response of the 2nd-order Padé approximation to three different input signals, sin(ωt), 1 − e^{−λt}, and a step function (from left to right, actual measurements)
Transfer functions as introduced before are an extremely useful tool in many branches of science and
technology. Accordingly, it is of interest to find an easy way of deriving an analog
computer program from a given transfer function. Stable systems satisfy m ≤ n and ak ≥ 0.
Following the reasoning outlined in section 5.17 transfer functions can be
mechanized generally as shown in figure 5.51.85
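Before mechanizing such a transfer function on the analog computer, it can be convenient to check its behavior digitally. The following sketch simulates a rational transfer function of the form (5.4); the coefficients are arbitrary examples, and note that SciPy expects them ordered from the highest power of s downwards, while the sums in (5.4) are written in ascending order.

```python
# Sketch: step response of a rational transfer system F(s) as in (5.4).
# Example coefficients only; scipy orders coefficients from s^n down to s^0.
import numpy as np
from scipy import signal

b = [1.0]                 # numerator: b_0
a = [0.25, 0.7, 1.0]      # denominator: 0.25 s^2 + 0.7 s + 1
F = signal.TransferFunction(b, a)

t = np.linspace(0, 20, 1000)
_, y = signal.step(F, T=t)   # y approaches b_0/a_0 = 1 for this stable system
```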
The arithmetic mean of a time-varying quantity x(t) over a fixed interval [t0, t1] is
$$\bar{x} = \frac{1}{t_1 - t_0}\int\limits_{t_0}^{t_1} x(t)\,dt.$$
Although this approach works fine on an analog computer, it requires fixed times
t0 and t1, which is practical only in a few cases.
To overcome this problem [Otterman 1960] introduced exponentially-mapped-
past (EMP) variables in order to extend the idea of an arithmetic mean to continuous variables in continuous time on which the classic EAI application note
[EAI 1.3.2 1964] is based.86 The basic idea is to introduce a weighting function
ensuring that recent values influence the result more strongly than past values.
Fig. 5.51. Analog computer setup for a transfer function as shown in equation (5.6)
The following equation demonstrates this technique with the integral running
from the most distant past −∞ to 0 (= “now”):87
$$\tilde{x}(0) = \alpha\int\limits_{-\infty}^{0} x(t)e^{\alpha t}\,dt$$
The factor α normalizes the result since
$$\int\limits_{-\infty}^{0} e^{\alpha t}\,dt = \frac{1}{\alpha}.$$
Shifting the endpoint from 0 to an arbitrary time T yields
$$\tilde{x}(T) = \alpha\int\limits_{-\infty}^{T} x(t)e^{\alpha(t-T)}\,dt = \alpha e^{-\alpha T}\int\limits_{-\infty}^{T} x(t)e^{\alpha t}\,dt, \tag{5.7}$$
Based on (5.8) the analog computer setup shown in figure 5.52 can be directly
derived. This basically implements a leaky integrator, which can also be seen as a
low-pass RC filter. It should be noted that this only works if no exact estimation
of the mean value is required during the startup time of the computation. After a
step input, the output will reach 95% of the step height in the time interval 3/α.
This must be taken into account for the startup time.
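The leaky integrator interpretation can be verified with a few lines of digital code; the following sketch applies the EMP mean $\dot{\tilde{x}} = \alpha(x - \tilde{x})$ to a unit step, with α and the step time chosen arbitrarily.

```python
# Minimal sketch of the EMP mean as a leaky integrator, integrated with
# explicit Euler steps; alpha and the input are arbitrary assumptions.
import numpy as np

alpha, dt = 2.0, 1e-3
t = np.arange(0.0, 5.0, dt)
x = np.where(t > 1.0, 1.0, 0.0)          # unit step at t = 1

x_emp = np.zeros_like(x)
for k in range(1, len(t)):
    x_emp[k] = x_emp[k - 1] + dt * alpha * (x[k - 1] - x_emp[k - 1])

# 3/alpha seconds after the step the output has reached about 95 %:
k95 = np.searchsorted(t, 1.0 + 3.0 / alpha)
print(x_emp[k95])                        # ~0.95
```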
This approach can be extended to the calculation of an EMP variance. In the
discrete case the variance is defined by
$$\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2.$$
Fig. 5.52. EMP mean circuit – the parameter α determines how quickly the weighting function “forgets” past input values
Fig. 5.53. EMP variance circuit
The EMP variance is then given by
$$\tilde{\sigma}^2(T) = \alpha\int\limits_{-\infty}^{T}\left(x(t) - \tilde{x}(t)\right)^2 e^{\alpha(t-T)}\,dt = \alpha e^{-\alpha T}\int\limits_{-\infty}^{T}\left(x(t) - \tilde{x}(t)\right)^2 e^{\alpha t}\,dt.$$
The EMP autocorrelation $\tilde{\rho}(\tau)$ can be computed as shown in figure 5.54, where τ represents the time delay used for the correlation.
The time delay function shown can be implemented using various techniques such
as those shown in section 5.17.
The Wiener-Khinchin theorem states that the spectral decomposition of the
autocorrelation function of a suitable function is given by the power spectrum of
that function.88 Thus, it is possible to compute an EMP power spectrum based
on ρe(τ ).
The EMP Fourier transform is defined as
$$\tilde{F}(\omega) = \alpha\int\limits_{-\infty}^{T} x(t)e^{-\alpha(T-t)}e^{-i\omega t}\,dt = \alpha e^{-i\omega T}\int\limits_{-\infty}^{T} x(t)e^{-\alpha(T-t)}e^{i\omega(T-t)}\,dt.$$
88 Questions regarding convergence criteria are beyond the scope of this section.
Fig. 5.54. EMP autocorrelation circuit
From this, the EMP power spectrum is obtained as
$$\tilde{P}(\omega) = \left(\alpha\int\limits_{-\infty}^{T} x(t)e^{-\alpha(T-t)}\cos(\omega(T-t))\,dt\right)^2 + \left(\alpha\int\limits_{-\infty}^{T} x(t)e^{-\alpha(T-t)}\sin(\omega(T-t))\,dt\right)^2.$$
The corresponding analog computer setup is shown in figure 5.55. At its heart is a
simple quadrature oscillator consisting of two integrators and a summer in a loop.
This yields the sine and cosine components, the squares of which are summed to
yield the desired output.
6 Examples
The following chapter describes a variety of problems of differing complexity which
have been solved using an analog computer. Reproducing these solutions on an
analog computer is not only highly instructive but also very rewarding and, last
but not least, fun.
This first example is based on the idea outlined in section 5.4 and details how a
polynomial
p(x) = ax³ + bx² + cx + d
with coefficients a, b, c, and d in [−1, 1] can be displayed on an oscilloscope allowing
a user to change the coefficients while immediately observing the effects on the
polynomial.
This requires a little trick circuit to implement coefficients in [−1, 1] as shown
in figure 6.1, which will be used in the polynomial circuit multiple times. With
0 ≤ α ≤ 1 the output of this circuit will vary linearly between [−x, x].
The necessary terms x, x2 , and x3 can be obtained by successive integration
over a suitable constant τ , which is linked to the OP-time of the analog computer
running in repetitive mode. Figure 6.2 shows the overall program for displaying
a polynomial of third degree. Ideally, an oscilloscope in x/y-mode is used with x
and p(x) as its respective input voltages. Figure 6.3 shows three representative
screenshots obtained with this program.
Order 0: The speed of the reaction is constant. This is typically the case when
the substances involved are available in abundance.
Fig. 6.4. Computer setup for the system A ⇌ B with rate constants k1 and k2
Fig. 6.5. Typical simulation result for the reaction A ⇌ B
Fig. 6.6. Setup for the equation ȧ = ḃ = −ċ = −kab
Fig. 6.7. Typical simulation result
Fig. 6.8. Computer setup for the reaction A → B → C with rate constants k1 and k2
Fig. 6.9. Simulation result for the reaction A → B → C
ṡ = −k1 es + k2 ε
ė = −k1 es + (k2 + k3 )ε
ε̇ = k1 es − (k2 + k3 )ε
ṗ = k3 ε
ȧ = −k1 aλ
ḃ = k1 aλ − k2 bλ = λ(k1 a − k2 b)
ċ = k2 bλ − k3 cλ = λ(k2 b − k3 c)
ḋ = k3 cλ − k4 dλ = λ(k3 c − k4 d)
ė = k4 dλ
λ̇ = −λ(k1 a + k2 b + k3 c + k4 d)
92 See [Röpke et al. 1969, p. 105 et seq.] and [Knorre 1971, p. 105 et seq.] for more details.
93 In a typical scaled computer setup, k1 is about 20.
94 This example is based on [Wagner 1972, p. 23 et seq.].
Fig. 6.10. Computer setup for the Michaelis-Menten kinetics
Fig. 6.11. Typical simulation result for the Michaelis-Menten kinetics
The classic SEIR model of an infectious disease consists of four sets of subpop-
ulations: The set of susceptible persons S, the set of exposed persons E, the
infected persons I, and the recovered (or removed, in case of death) persons R.
This model95 is described by
Ṡ = −βSI
Ė = βSI − αE
I˙ = αE − γI
Ṙ = γI
95 See [Schaback 2020] for a recent example of this and other systems to model COVID-19.
[Bracher et al. 2021] gives details on the problems of predicting the course of epidemics.
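A digital cross-check of the SEIR equations takes only a few lines; the rate constants and initial subpopulations below are arbitrary illustrative values, not fitted to any real epidemic.

```python
# SEIR model exactly as in the equations above; parameters are assumptions.
from scipy.integrate import solve_ivp

alpha, beta, gamma = 0.2, 0.5, 0.1

def seir(t, u):
    S, E, I, R = u
    return [-beta * S * I,
            beta * S * I - alpha * E,
            alpha * E - gamma * I,
            gamma * I]

sol = solve_ivp(seir, (0, 200), [0.99, 0.0, 0.01, 0.0], max_step=0.5)
S, E, I, R = sol.y
print("peak infected fraction:", I.max())
```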
Fig. 6.12. Computer setup for the multi-stage methane chlorination reaction
I = mr².
Since the sum of all forces in a closed system must be zero it follows that
$$I\ddot{\varphi} = \sum_i \tau_i \tag{6.1}$$
where the τi denote the various torques acting in the system: the gravitational restoring torque, the damping torque, and the external driving torque.
97 This example was inspired by the numerical simulation of such a driven pendulum shown
in https://www.myphysicslab.com/pendulum/pendulum/chaotic-pendulum-en.html, retrieved
on June 16th , 2019.
Collecting these torques and solving for the highest derivative yields
$$\ddot{\varphi} = -\frac{g}{r}\sin(\varphi) + \frac{A\cos(\omega t) - \beta\dot{\varphi}}{mr^2}. \tag{6.2}$$
As the highest derivative is now isolated on the left-hand side, this form is ideally
suited to applying Kelvin’s feedback technique to derive an analog computer
setup.
First, a harmonic forcing function with a variable frequency, such as the sweep
circuit described in section 4.2, is required. The circuit used is shown in figure 6.15.
The sine-term in the expression (6.2) can be generated either by the subcircuit
shown in figure 4.17 in section 4.4 or by a Taylor approximation, as already
described in section 4.4.98 The overall analog computer program for the damped
pendulum subjected to a forcing function is shown in figure 6.16.
This setup invites playing with the parameters β, A/mr2 , g/r, and ω. Gener-
ally, the behavior of such an oscillatory system can best be visualized by a phase
space plot, or phase diagram, which shows all possible states of the system.
Figure 6.17 shows the chaotic behavior of the system for a certain parameter set-
ting. Here φ̈ and −φ̇ were plotted “against” each other, i. e., these variables were
connected to the x- and y-inputs of the oscilloscope.
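The chaotic behavior is easy to reproduce numerically. The sketch below integrates (6.2) with the damping and drive terms lumped into two coefficients; all parameter values are assumptions picked to show irregular motion, not the settings used for figure 6.17.

```python
# Driven, damped pendulum (6.2); g/r, A/(m r^2), and beta/(m r^2) are
# lumped into arbitrary coefficients.
import numpy as np
from scipy.integrate import solve_ivp

g_r, drive, damp, omega = 1.0, 1.15, 0.1, 0.66

def pendulum(t, u):
    phi, dphi = u
    return [dphi,
            -g_r * np.sin(phi) + drive * np.cos(omega * t) - damp * dphi]

sol = solve_ivp(pendulum, (0, 500), [0.0, 0.0], max_step=0.01)
phi, dphi = sol.y      # plot dphi against phi for a phase-space portrait
```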
ÿ + (a − 2q cos(2t)) y = 0
with the initial conditions y(0) = 1 and ẏ(0) = 0. Following [EAI 1964], the parameter a is removed by letting a := 2q. Defining a function
$$x(t) := 1 - \cos(2t) \tag{6.3}$$
then yields
$$\ddot{y} + axy = 0. \tag{6.4}$$
As usual, the time t as parameter of the function x(t) has been omitted here and
below.
99 Cf. [Ruby 1996], [Yokogawa, p. 17 et seq.], and [Giloi et al. 1963, p. 196 et seq.] for more
examples and details. This section is mainly based on [EAI 1964].
At first sight, one might be tempted to implement the function x using a diode
function generator or a suitable approximation, but due to the fixed argument
range of a function generator this approach is not viable. As is frequently the
case, it is much better to generate the required function by solving a suitable
differential equation. Differentiating (6.3) twice yields
$$\dot{x} = 2\sin(2t) \tag{6.5}$$
and
$$\ddot{x} = 4\cos(2t). \tag{6.6}$$
This results in the following differential equation, which has (6.3) as a solution:
$$\ddot{x} + 4x = 4. \tag{6.7}$$
This is demonstrably correct, as (6.5) and (6.6) can be substituted into (6.7) to
give
4 cos(2t) + 4 (1 − cos(2t)) = 4.
As a first step in deriving a computer setup for (6.4), equation (6.7) must first
be converted into an unscaled analog computer program. It can be written as a
system of equations which quite closely resembles a program:
$$\dot{x} = \int\ddot{x}\,dt$$
$$x = \int\dot{x}\,dt \tag{6.8}$$
$$\ddot{x} = 4 - 4x \tag{6.9}$$
Now, only ẋ is still unscaled; it lies in the range [−2, 2] due to (6.5). Scaling it accordingly yields
$$\hat{\dot{x}} = 2\int\hat{\ddot{x}}\,dt$$
$$\hat{x} = \int\hat{\dot{x}}\,dt$$
$$\hat{\ddot{x}} = 1 - 2\hat{x} \tag{6.13}$$
The resulting computer setup for these equations, where no function exceeds the
interval [−1, 1], is shown in figure 6.18. Keep in mind that the output signal has
been scaled down from the interval [0, 2] to [0, 1] – a fact that has to be taken
into account during the scaling of (6.4).
Now, equation (6.4) must be scaled accordingly. With 0 ≤ x ≤ 2 a static
variant ÿ + 2ay = 0 suitable for the further scaling process is obtained. This is a
harmonic oscillator with a solution like
y = y(0) cos(ωt)
ẏ = −y(0)ω sin(ωt).
Accordingly, a scaling factor of 1/5 is required for ẏ. Due to the fractious behaviour
of the Mathieu equation, another factor of 1/5 is taken into account, yielding an
overall scaling factor of 1/25 and thus
$$\hat{\dot{y}} = \frac{1}{25}\int\ddot{y}\,dt$$
$$\hat{y} = \frac{1/5}{1/25}\int\hat{\dot{y}}\,dt = 5\int\hat{\dot{y}}\,dt$$
$$\hat{\ddot{y}} = 10a\hat{x}\hat{y}. \tag{6.14}$$
The factor 10 in (6.14) can now be distributed over the equation. Introducing
an additional scaling factor 1/10 for a to simplify setting this parameter finally
yields the program shown in figure 6.19, which allows for values 0 ≤ a ≤ 10 except
for the regions where the system is unstable.
Figure 6.20 shows a collection of typical solutions of Mathieu's equation
for increasing values of a. These solutions were obtained with the integrator time
scale factor set to k0 = 103 and the computer running in repetitive operation.
The oscilloscope was set to 2 ms per division horizontally and 2 V per division
vertically.
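The stability behavior is easy to explore digitally as well. This sketch integrates (6.4) with x(t) = 1 − cos(2t) generated directly rather than by the auxiliary differential equation (6.7); the value of a is an arbitrary example.

```python
# Mathieu's equation in the form (6.4): y'' + a*x*y = 0 with
# x = 1 - cos(2t), y(0) = 1, y'(0) = 0; a is an arbitrary example value.
import numpy as np
from scipy.integrate import solve_ivp

a = 2.0

def mathieu(t, u):
    y, dy = u
    return [dy, -a * (1.0 - np.cos(2.0 * t)) * y]

sol = solve_ivp(mathieu, (0, 30), [1.0, 0.0], max_step=0.01)
y = sol.y[0]    # bounded or growing depending on where a falls in the
                # stability chart
```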
Fig. 6.18. Scaled setup for equation (6.7)
Fig. 6.19. Scaled setup for equation (6.4)
The van der Pol equation, named after Balthasar van der Pol, who studied it around 1920 as a result
of his pioneering work on vacuum tubes, describes an oscillator with damping
behavior which depends on the amplitude of its output. For small amplitudes the
damping is negative, so the amplitude will rise until it reaches a certain threshold
at which the damping term becomes positive, resulting in an automatic amplitude
stabilization.100
The basic form of the van der Pol equation is
$$\ddot{y} + \mu\left(y^2 - 1\right)\dot{y} + y = 0,$$
which, solved for the highest derivative, yields
$$\ddot{y} = -y - \mu\left(y^2 - 1\right)\dot{y}.$$
100 See [van der Pol et al. 1928] for more details on this type of oscillator. In this paper,
van der Pol and his co-author J. van der Mark develop an electronic model for a heart.
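A quick numerical rendition of the unscaled equation shows the amplitude stabilization directly; µ below is an arbitrary example value.

```python
# Unscaled van der Pol oscillator; mu is an arbitrary example value.
from scipy.integrate import solve_ivp

mu = 1.5

def vdp(t, u):
    y, dy = u
    return [dy, -y - mu * (y * y - 1.0) * dy]

sol = solve_ivp(vdp, (0, 50), [0.1, 0.0], max_step=0.01)
y, dy = sol.y   # starting near the origin, the trajectory spirals out onto
                # the limit cycle; plot dy over y for the phase-space plot
```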
6.6 Van der Pol’s equation 123
Fig. 6.20. Typical solutions of Mathieu’s equation for some values 0 ≤ a/10 ≤ 1
Fig. 6.21. Unscaled computer setup for the van der Pol equation
Fig. 6.22. Final computer setup for the van der Pol equation
The next step involves scaling of y², which lies in the interval [0, 4]. Accord-
will be removed and this will be compensated for at the input of the following
summer and so forth.
Executing this process for all variables within the program yields the following
set of equations with auxiliary variables y1 , y2 , and y3 , which can be readily
converted into a scaled computer setup as shown in figure 6.22:101
$$-\hat{\dot{y}} = -\frac{10}{4}\int\hat{\ddot{y}}\,d\tau$$
$$\hat{y} = -2\int-\hat{\dot{y}}\,d\tau$$
$$\hat{y}_1 = -\left(-\frac{1}{4} + \hat{y}^2\right)$$
$$\hat{y}_2 = -\hat{\dot{y}}\,\hat{y}_1$$
101 Please note that the sign-inverting feature of integrators and summers has already been
taken into account in this set of equations.
Fig. 6.23. Typical phase space plot of the solution of van der Pol’s differential equation
$$\hat{y}_3 = \frac{4}{5}\hat{y}_2\,\mu$$
$$\hat{\ddot{y}} = -\left(\frac{\hat{y}}{5} + 10\hat{y}_3\right)$$
5
As before, a phase space plot will be used to show the behavior of the oscillat-
ing system. −ḃy and yb were used to create the display shown in figure 6.23. It can
be seen clearly how the amplitude builds up over only one period starting from a
small initial value for yb near the origin of the display. Running the computer in
repetitive mode allows the observation of the influence of various settings for µ.
Figure 6.24 shows a classic exercise problem regarding the generation of Bessel
functions on an analog computer.102 These functions were first described by
Daniel Bernoulli103 and later generalised by Friedrich Bessel.104 Bessel
functions of the first kind are usually denoted by Jn(t) and are solutions of the
Bessel differential equation
$$t^2\ddot{y} + t\dot{y} + \left(t^2 - n^2\right)y = 0.$$
Sometimes these are called cylindrical harmonics. The parameter n in the equation
above defines the order.105
Fig. 6.24. Classic exercise regarding analog computer solutions for Bessel functions
Fig. 6.25. Analog computer program for generating Bessel functions J0(τ) and J1(τ)
Figure 6.26 shows the overall program setup on THE ANALOG THING.106
A typical result is shown in figure 6.27. Note that the program has not been
properly scaled. τ̇ was set according to the operate-time of the machine running
in repetitive mode. The central scaling factor λ was set by trial and error to get
the desired result.
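The same generation-by-differential-equation idea can be checked against library values of J0: the sketch below integrates Bessel's equation for n = 0, rewritten as ÿ = −ẏ/t − y, and compares the result with scipy.special.jv; starting slightly off t = 0 avoids the singularity.

```python
# Cross-check: generating J0 by integrating Bessel's equation for n = 0
# and comparing with SciPy's reference implementation.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

def bessel0(t, u):
    y, dy = u
    return [dy, -dy / t - y]        # t^2 y'' + t y' + t^2 y = 0, n = 0

t0, t1 = 1e-6, 20.0                 # start just off the singular point t = 0
sol = solve_ivp(bessel0, (t0, t1), [1.0, 0.0], max_step=0.01,
                t_eval=np.linspace(0.1, t1, 500))
print("max deviation from jv(0, t):", np.max(np.abs(sol.y[0] - jv(0, sol.t))))
```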
$$-\frac{\hbar^2}{2m}\frac{\partial^2\Psi(x)}{\partial x^2} + (V(x) - E)\Psi(x) = 0 \tag{6.17}$$
with
$$\hbar = \frac{h}{2\pi}$$
denoting the reduced Planck constant, m being the mass of the nonrelativistic
particle under consideration, V(x) representing the potential energy (i. e., the
depth of the potential well), and E being the energy of the particle. Ψ(x) represents
the probability amplitude, which depends on the x-coordinate of the one-dimensional
system being examined. Solving (6.17) for the highest derivative yields
$$\frac{\partial^2\Psi(x)}{\partial x^2} = \frac{2m}{\hbar^2}(V(x) - E)\Psi(x).$$
Fig. 6.26. Setup for generating J0(τ) and J1(τ)
Fig. 6.27. Typical output for J0(τ) and J1(τ)
To solve this problem on an analog computer x will be represented by the
integration time basically yielding
$$\ddot{\Psi} = \Phi\Psi \tag{6.18}$$
with
$$\Phi := \frac{2m}{\hbar^2}(V - E),$$
omitting the function arguments (t) instead of (x) as usual to simplify the notation.
Equation (6.18) can be directly converted into the unscaled analog computer
program shown in figure 6.28. Its input is the time-dependent function Φ describing
the potential well yielding the probability amplitude Ψ as well as Ψ2 at its outputs.
The initial conditions for this function are set with the potentiometers Ψ̇(0) and
Ψ(0).
The computer has been run in repetitive operation with an OP-time of 20 ms
and a time scale factor of k0 = 102 set on all integrators. The input function Φ
resembles a square trough and is generated with the circuit shown in figure 6.29.
The integrator on the left yields a linear ramp function running from −1 to +1,
which is fed to a series-connection of two comparators with electronic switches.
Using the coefficient potentiometers labeled l and r the left and right position of
the trough’s walls can be set. The height and depth of the trough are set by the
coefficients E and V0 , finally yielding Φ.
Figure 6.30 shows a typical result from an unscaled simulation run.108 The
trough parameters l and r were set to yield an approximately symmetric trough,
which is shown in the upper trace. The two following curves show Ψ and Ψ2 . Here
Ψ(0) was assumed to be zero while Ψ̇(0) was set experimentally so that the two
integrators in figure 6.28 did not go into overload.
One of the big advantages of an analog computer, namely the ease with which
parameter variations can be tested, becomes very clear in this program. Varying
E, V0 , Ψ̇(0), and Ψ(0) gives a good sense for the behavior of the one-dimensional
Schrödinger equation.109
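These parameter variations can also be explored digitally. The following sketch integrates (6.18) over a square trough; units are chosen so that 2m/ℏ² = 1, and the trough geometry, wall height, energy E, and initial slope are all arbitrary assumptions.

```python
# One-dimensional Schrodinger problem (6.18), psi'' = Phi * psi, with a
# square potential trough; all values are illustrative assumptions
# (units with 2m/hbar^2 = 1).
from scipy.integrate import solve_ivp

V0, Vwall, E, l, r = 0.0, 10.0, 2.0, -1.0, 1.0

def Phi(x):
    return (V0 if l <= x <= r else Vwall) - E

def schroedinger(x, u):
    psi, dpsi = u
    return [dpsi, Phi(x) * psi]

# psi = 0 at the left boundary; the small initial slope plays the role of
# Psi'(0) and, as in the analog setup, must be chosen so nothing diverges
sol = solve_ivp(schroedinger, (-3.0, 3.0), [0.0, 0.05], max_step=0.005)
psi = sol.y[0]          # psi**2 gives the probability density
```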
$$\delta(v) = rv^2 \tag{6.19}$$
with
$$v = \sqrt{\dot{x}^2 + \dot{y}^2}.$$
This is a bit oversimplified as, according to Siacci for example, a realistic drag
function for a historic projectile has the form111
$$\delta(v) = 0.2002v - 48.05 + \sqrt{9.6 + (0.1648v - 47.95)^2} + \frac{0.0442v(v - 300)}{371 + \left(\frac{v}{200}\right)^{10}}.$$
Nevertheless, (6.19) will suffice for the following example. The general equations
of motion of the projectile in this two-dimensional problem are
$$\ddot{x} = -\frac{\delta(v)}{m}\cos(\varphi)\quad\text{and} \tag{6.20}$$
$$\ddot{y} = -g - \frac{\delta(v)}{m}\sin(\varphi). \tag{6.21}$$
With cos(φ) = ẋ/v, sin(φ) = ẏ/v, and the mass m absorbed into the drag coefficient, these become
$$\ddot{x} = -\frac{\delta(v)}{v}\dot{x}$$
$$\ddot{y} = -g - \frac{\delta(v)}{v}\dot{y}.$$
The corresponding computer setup is shown in figures 6.31 and 6.32. The
upper and lower halves of the circuit are symmetric except for the input for the
gravitational acceleration to the lower left integrator which gives ẏ. The velocities
ẋ and ẏ are fed to multipliers to give their respective squares, which are then
summed and square rooted to get v since δ(v)/v = rv according to (6.19).
The parameters α1 and α2 are scaling parameters that are set to give a suitably
scaled picture on the oscilloscope set to x, y-mode. Figure 6.32 shows the actual
setup and parameterization yielding the result shown in figure 6.33. The initial
conditions satisfy ẋ(0) = cos(φ0 ) and ẏ(0) = sin(φ0 ) with φ denoting the elevation
of the cannon. In the screenshot shown φ0 was set to 60◦ .
This example uses an analog computer to simulate the path of a charged particle
traversing a magnetic field.112 Basically, a particle with charge q moving in a
magnetic field $\vec{B}$ is subjected to a force $\vec{F}_\text{Lorentz}$, which is perpendicular to both
the magnetic field $\vec{B}$ and the direction in which the particle moves, yielding
$$\vec{F}_\text{Lorentz} = q\left(\vec{v}\times\vec{B}\right) = q\left(\dot{\vec{r}}\times\vec{B}\right).$$
$\vec{v}$ denotes the velocity and $\vec{r}$ the position of the particle under consideration. In
addition to this,
$$\vec{F}_\text{particle} = m\ddot{\vec{r}}$$
112 This is based on [Telefunken/Particle], which was probably written by Ms. Inge Bor-
chardt, who did the trajectory simulations at DESY for the high-energy particle accelerators
– initially on an EAI 231RV analog computer and subsequently on a Telefunken RA 770 hybrid
computer.
Parameter Value
ẋ(0) 0.5
x(0) 1
ẏ(0) 0.86
y(0) 1
g 0.72
r 1
α1 0.34
α2 0.55
Fig. 6.32. Setup and parametrization of the ballistic trajectory problem on an Analog Paradigm
Model-1 analog computer
$$\vec{F}_\text{friction} = -\mu\dot{\vec{r}}$$
It is further assumed that the particle moves in the x-y plane, which is perpendicular
to the magnetic field. With $\vec{e}_x$, $\vec{e}_y$, and $\vec{e}_z$ denoting the unit vectors
pointing into the x-, y-, and z-directions, this results in
$$\vec{B} = B_z\vec{e}_z\quad\text{and} \tag{6.23}$$
$$\dot{\vec{r}} = \dot{x}\vec{e}_x + \dot{y}\vec{e}_y, \tag{6.24}$$
yielding
$$\dot{\vec{r}}\times\vec{B} = \dot{y}B_z\vec{e}_x - \dot{x}B_z\vec{e}_y.$$
Fig. 6.34. Analog computer setup to simulate the path of a charged particle in a magnetic field
These equations can now be transformed into the analog computer program shown
in figure 6.34.
Fig. 6.35. Paths of a particle not captured within the magnetic field
Fig. 6.36. Particle captured in the magnetic field
The two comparators in the middle of the figure control the area to which
the magnetic field is confined. x1 and x2 determine the left and right coordinate
enclosing the field (setting both to maximum yields a field that extends from
−1 ≤ x ≤ +1). The x- and y-components of the particle’s initial velocity can be
controlled by the two potentiometers labeled ẋ(0) and ẏ(0). The outputs x and y
are connected to an oscilloscope set to x, y-mode.
The results shown in figures 6.35 and 6.36 were obtained with the analog
computer set to repetitive operation with an OP-time of 10 ms and time scale
factors k0 = 103 set on all four integrators.
Figure 6.35 shows two typical paths of a particle which is deflected by the
magnetic field but not captured. The values for α and β were the same for both
pictures. Only x2 , the coordinate at which the magnetic field ceases to act on the
right side, has been increased in the second case. The particle in the left picture is
fast enough to escape the narrow magnetic field while the wider field in the right
picture causes the particle to be reflected.
Figure 6.36 shows the path of a particle with heavy friction. The friction is so
large that the particle cannot escape the area in which the magnetic field acts.
6.11 Rutherford-scattering
To run this program the analog computer is set to repetitive operation with an
operation time tOP = 30 ms. The integrator of the triangle wave generator must
be disconnected from the central timing control by patching its ModeIC-input to
+1 (only the third or fourth integrator of an INT4 module can be used for that
purpose), so that it will not be reset between two successive computing runs.
The picture shown in figure 6.39 was obtained with time-constants k0 = 100
on the four integrators in figure 6.37 and k0 = 1 in the triangle-wave generator.
The camera was set to ISO 100 with an exposure time of 8 seconds.
Simulating (or determining) the movement of three (celestial) bodies with given
parameters, such as masses, gravitational constant, initial conditions for the bod-
ies’ coordinates and velocities, is known as the three-body problem.114 The first
one to tackle this problem was Sir Isaac Newton in the first book, section XI,
of his Principia, which was considered “the most valuable chapter that was ever
written on physical science” according to Sir George Biddell Airy.115
Although an analytical solution is possible in the simpler case of two bodies,
Joseph-Louis Lagrange was able to give some particular solutions for the much
more difficult three-body problem. Since these are beyond the scope of this sec-
tion, the reader is referred to the standard texts [Battin 1999, p. 365 et seq.] and
114 This is a special case of the n-body problem, which will not be investigated further here.
115 Cf. [Moulton 1923, p. 363].
[Moulton 1923, p. 277 et seq.] for an analytical treatment of this problem and
some background information. However, using an analog computer, it is straight-
forward to investigate the three-body problem.
In this section a simple universe consisting of two suns with fixed positions ⃗r1 =
(x1 , y1 ) and ⃗r2 = (x2 , y2 ), masses m1 = m2 = 1, and a satellite of negligible mass
criss-crossing between these two celestial bodies will be simulated. This system
can be readily described by the following set of coupled differential equations:
$$m_1\ddot{\vec{r}}_1 = Gm_1m_2\frac{\vec{r}_2 - \vec{r}_1}{\left\|\vec{r}_2 - \vec{r}_1\right\|_2^3} + Gm_1m_3\frac{\vec{r}_3 - \vec{r}_1}{\left\|\vec{r}_3 - \vec{r}_1\right\|_2^3}$$
$$m_2\ddot{\vec{r}}_2 = Gm_2m_3\frac{\vec{r}_3 - \vec{r}_2}{\left\|\vec{r}_3 - \vec{r}_2\right\|_2^3} + Gm_2m_1\frac{\vec{r}_1 - \vec{r}_2}{\left\|\vec{r}_1 - \vec{r}_2\right\|_2^3}$$
$$m_3\ddot{\vec{r}}_3 = Gm_3m_1\frac{\vec{r}_1 - \vec{r}_3}{\left\|\vec{r}_1 - \vec{r}_3\right\|_2^3} + Gm_3m_2\frac{\vec{r}_2 - \vec{r}_3}{\left\|\vec{r}_2 - \vec{r}_3\right\|_2^3} \tag{6.29}$$
The vectors ⃗ri , 1 ≤ i ≤ 3 describe the positions of the different bodies with masses
mi while G is the gravitational constant.
Assuming that the two suns are stationary, only equation (6.29) has to be
implemented on the analog computer. Since the suns' masses are equal to 1, the
satellite's mass cancels out, yielding
$$\ddot{\vec{r}}_3 = \frac{G}{\left\|\vec{r}_1 - \vec{r}_3\right\|_2^3}(\vec{r}_1 - \vec{r}_3) + \frac{G}{\left\|\vec{r}_2 - \vec{r}_3\right\|_2^3}(\vec{r}_2 - \vec{r}_3). \tag{6.30}$$
Introducing the distances Δr13 and Δr23 between the satellite and the two suns ((6.31) and (6.32)) and splitting (6.30) into its x- and y-components yields the following two equations
which describe the satellite's trajectory in Cartesian coordinates:
$$\ddot{x}_3 = (x_1 - x_3)\frac{G}{\Delta r_{13}^3} + (x_2 - x_3)\frac{G}{\Delta r_{23}^3}\quad\text{and} \tag{6.33}$$
$$\ddot{y}_3 = -y_3\left(\frac{G}{\Delta r_{13}^3} + \frac{G}{\Delta r_{23}^3}\right). \tag{6.34}$$
These equations can now easily be implemented on an analog computer. Figure
6.40 shows the partial computer setup yielding the distance terms (6.31) and
(6.32).116
116 The two square root units followed by a multiplier each could be replaced by two diode
function generators, if these are available.
The simulation of a ball bouncing in a box described in the following section is not
only interesting in itself but also ideally suited for setting up a fascinating display
for exhibitions, schools, etc. Using high-speed integration and repetitive operation
of the analog computer, a flicker-free oscilloscope display of the ball’s path can
be easily obtained. This allows one to change the parameters of the simulation
manually and directly observe the effects of these changes.118
Fig. 6.42. Typical satellite trajectory for a small value of G
Fig. 6.43. Trajectory for a larger value of G
Figure 6.44 shows the basic idea of the simulation: The bounding box is as-
sumed as the rectangle [−1, 1] × [−1, 1] and the position of the ball within this box
is described by the coordinates (x, y). At the start of a simulation run the ball has
two initial conditions: an initial velocity v(0) (of which only the x-component will
be given) and an initial y-position y(0), set to 1.
The x- and y-components of the ball’s position are completely independent
so they can be generated by two separate sub-programs. The x-component of the
ball’s velocity is assumed to decrease linearly over time. The actual x-position of
the ball is derived by integrating over its velocity. Every time the ball hits the left
or right wall of the box, it changes its direction, i. e., it is reflected by the wall.
The y-component is that of a free-falling ball bouncing back elastically when
hitting the floor. Figure 6.45 shows those two variables over time as displayed
on an oscilloscope: Starting from the left, the computer is first in IC-mode. As
soon as the OP-mode starts, the ball begins to drop until it hits the floor while
the x-component increases (trace in the middle) with decreasing velocity until it
reaches the right wall, etc.
Figure 6.46 shows the computer setup yielding the x-component of the ball’s
position. The leftmost integrator generates the velocity component vx of the ball
which starts at +1 and decreases linearly controlled by the ∆v-potentiometer.119
It should be noted that this setup differs slightly from the triangle generator shown
in figure 5.15 in section 5.6 by the introduction of a second comparator with an
associated electronic switch. This yields a stable comparison value of ±1 for the
first comparator since its associated switch no longer yields ±1 at its output,
because it is driven by the leftmost integrator in this setup.
119 The diode at the output of the integrator is not really necessary, but it makes parameter
setup easier as it prevents a negative velocity at the cost of introducing a slight voltage drop
determined by its intrinsic forward threshold voltage.
Fig. 6.44. Movement of the bouncing ball
Fig. 6.45. x- and y-components of the bouncing ball
Fig. 6.47. Generating the y-component
Fig. 6.48. Trace of the bouncing ball
The initial vertical velocity is assumed to be zero. At the beginning of a simulation run, the ball begins a free-falling
trajectory controlled by (6.35). When the ball hits the floor, it will rebound due
to elastic collision described by the term c(−y − 1) with a constant c controlling
the elasticity.
Figure 6.47 shows the corresponding computer setup. The leftmost integra-
tor integrates over the gravity g and takes the friction d into account. It yields
the negative y-component vy of the ball’s velocity. The integrator on the right
integrates over vy yielding the actual y-position.
The implementation of the elastic rebound is extremely simple and consists
of two diodes. The “active” element here is the 10 V Zener-diode which starts
conducting when the ball hits the floor of the box (ignoring the small bias voltage
of the diodes). The diode on the left makes sure that the Zener-diode won’t
conduct while the ball is within the box. Since the rebound effect is rather violent
the output of the two diodes connected in series is directly connected to the
summing junction input (SJ) of the first integrator. The time scale factors of the
integrators are both set to k0 = 102 .
With settings as denoted in the circuit diagrams and the computer set to
repetitive operation with an OP-time of 20 ms and short IC-time, a display such
as that shown in figure 6.48 can be easily obtained. Because the oscilloscope used
here had no blanking-input the return of the beam to the upper left corner is
faintly visible. If a blanking-input is available, it can be connected to a trigger
output of the analog computer’s control unit to avoid this artefact.
How could one not like zombie movies? It was about time that mathematicians –
Robert Smith? and his collaborators – shed some light on zombie attacks from a
Figure 6.49 shows the resulting qualitative (i. e., unscaled) program for equations
(6.36) and (6.37). A typical simulation result obtained with this setup is shown in
figure 6.50. The computer was run in repetitive mode with the time scale factors
of the integrators set to k0 = 103 . Additionally, all integrator inputs had a weight
of 10 further speeding up the simulation by another factor of 10. The OP-time was
set to 60 ms. The oscilloscope, relying on its built-in time-deflection, was explicitly
triggered by one of the trigger outputs of the CU to obtain a stable display.
The parameters used were derived experimentally by manually changing the
coefficients until an oscillatory behavior was obtained. The output shown corre-
sponds to h(0) = z(0) = 0.6, α = 0.365, β = 0.95, δ = 0.84 (very successful
zombies, indeed), γ = 0.44, and ζ = 0.09. As in most predator-prey-systems it
is quite difficult to find a parameter setting which gives oscillatory behavior with
stable minima and maxima amplitudes. Since neither species becomes extinct with
this particular parameter set there is plenty of scope for many Zombie movies in
years to come.
Fig. 6.49. Analog computer program for equations (6.36) and (6.37)
The Rössler attractor is described by the following three coupled differential equations:
$$\dot{x} = -(y + z)$$
$$\dot{y} = x + ay$$
$$\dot{z} = b + z(x - c).$$
Scaled for the analog computer, these equations become
$$\dot{x} = -0.8y - 2.3z$$
$$\dot{y} = 1.25x + a^*y$$
$$\dot{z} = b^* + 15z(x - c^*)$$
with a∗ = 0.2, b∗ = 0.005, and c∗ = 0.3796. The resulting computer setup is shown
in figure 6.51 while figure 6.52 shows an x-y-plot of the attractor, photographed
with ISO 100 and a time scale factor k0 = 103 set on all three integrators.
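Since the scaled equations are given in full, a digital sanity check is straightforward; it is also a good way to confirm that the scaling keeps all variables within the machine unit interval [−1, 1]. The initial conditions below are arbitrary.

```python
# Scaled Roessler system exactly as given above; checks that the scaled
# variables stay within the machine unit range.
import numpy as np
from scipy.integrate import solve_ivp

a_s, b_s, c_s = 0.2, 0.005, 0.3796

def roessler(t, u):
    x, y, z = u
    return [-0.8 * y - 2.3 * z,
            1.25 * x + a_s * y,
            b_s + 15.0 * z * (x - c_s)]

sol = solve_ivp(roessler, (0, 500), [0.1, 0.0, 0.0], max_step=0.01)
print("largest |value|:", np.abs(sol.y).max())   # should stay below 1
```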
A particularly beautiful picture can be obtained with a simple (static) 3d-
projection of the attractor. One input of the oscilloscope, which is set to x, y-mode,
is directly fed by x while the other input is fed from a summer yielding
$$y^* = y\sin(\varphi) + z\cos(\varphi),$$
where the sine/cosine terms are directly set by coefficient potentiometers as shown
in figure 6.54. Using these two coefficient potentiometers the angle of view is freely
Fig. 6.52. x-y-plot of the Rössler attractor
Fig. 6.53. Projection of a Rössler attractor
adjustable. Figure 6.53 shows a typical projection of the attractor obtained with
this setup.
Another, even more famous, chaotic system is the intriguing Lorenz attractor
developed in 1963 by Edward Norton Lorenz as a simplified model for atmo-
spheric convection and first described in [Lorenz 1963].123 Although this seminal
work was performed on a digital computer, a Royal McBee LGP-30, the Lorenz
attractor is, of course, ideally suited to being implemented on an analog computer
as will be shown below.
This dynamic chaotic attractor is described by the following system of three
coupled differential equations:
ẋ = σ(y − x)
ẏ = x(ρ − z) − y
123 More information about the construction of chaotic attractors in general as well as this
attractor in particular may be found in [Kuehn 2015, p. 468 et seq.].
ż = xy − βz
The parameters are σ = 10, β = 8/3, and ρ = 28. Obviously, this set of DEQs must
be scaled in order to be implemented on an analog computer. The resulting scaled
equations look like this:
$$x = \int(1.8y - x)\,dt + C$$
$$s = 1 - 2.678z$$
$$y = \int(1.5556xs - 0.1y)\,dt$$
$$z = \int(1.5xy - 0.2667z)\,dt.$$
C denotes the initial condition of the integrator yielding x and is not critical.
Taking into account that every summer and integrator of an analog computer
performs an implicit change of sign and further noting that xy = −x(−y), these
equations can be further simplified saving two inverters in the resulting computer
setup:
$$-x = -\int(1.8y - x)\,dt + C$$
$$-z = -\int(1.5xy - 0.2667z)\,dt$$
$$s = -(1 - 2.68z)$$
$$r = -xs$$
$$-y = -\int(1.536r - 0.1y)\,dt.$$
In 1984 Edward Norton Lorenz described another chaotic attractor that did
not become as famous as the one described above but is nevertheless interesting to
124 Although not fully consistent with the scaling of the equations, a value of 0.125 instead of
0.2667 at the feedback potentiometer of the second integrator has shown to yield great results.
Fig. 6.55. Computer setup for the Lorenz attractor (the value of 0.2667 in the feedback of the second integrator can be decreased to about 0.125 to yield a nice result)
Fig. 6.56. Classic display of the Lorenz attractor
Fig. 6.57. Different angle of view on the Lorenz attractor
A much more complex chaotic system, the Chua oscillator, is described in this
section. This system was discovered in 1983 by Leon Ong Chua and is a clas-
sic example of an electronic circuit exhibiting chaotic behavior. This oscillator
generates a unique and particularly beautiful attractor called the Double Scroll
attractor. At its heart is a (hypothetical) nonlinear device called a Chua diode.
The mathematical description of this particular oscillator is based on three
coupled differential equations of the form
ẋ = c1 (y − x − f (x)), (6.38)
where f(x) describes the behavior of the Chua diode and is defined as
$$f(x) = m_1x + \frac{m_0 - m_1}{2}\left(|x + 1| - |x - 1|\right). \tag{6.41}$$
The standard values for the parameters are
c1 = 15.6,
c2 = 1,
c3 = 28,
m0 = −1.143, and
m1 = −0.714.
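Determining value ranges on a digital computer, as described in the following paragraph, can look like the sketch below. The nonlinearity (6.41) is taken verbatim; since equations (6.39) and (6.40) do not appear in the text above, the usual dimensionless Chua formulation ẏ = c2(x − y + z), ż = −c3y is assumed here.

```python
# Unscaled Chua oscillator with the standard parameters; f(x) is (6.41).
# The y' and z' equations follow the common dimensionless Chua form and
# are an assumption here.
from scipy.integrate import solve_ivp

c1, c2, c3 = 15.6, 1.0, 28.0
m0, m1 = -1.143, -0.714

def f(x):                                   # Chua diode, equation (6.41)
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

def chua(t, u):
    x, y, z = u
    return [c1 * (y - x - f(x)),            # (6.38)
            c2 * (x - y + z),               # assumed form of (6.39)
            -c3 * y]                        # assumed form of (6.40)

sol = solve_ivp(chua, (0, 50), [0.1, 0.0, 0.0], max_step=0.002)
x, y, z = sol.y     # max(|x|), max(|y|), max(|z|) guide the scaling process
```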
Scaling this system of coupled differential equations is not simple due to its
pronounced non-linear behavior. The ranges of the various terms were determined
by computing example solutions on a digital computer – a technique that was al-
ready used in the 1960s. These results were then used to guide the scaling process.
Fig. 6.59. Typical behavior of the Lorenz-84 system, the three screenshots show x vs. y, x vs. z,
and y vs. z
To simplify the overall process, equations (6.38), (6.39), (6.40), and (6.41) were
split into individual terms, resulting in the following set of scaled equations.
(6.38) is replaced by
$$x_0 = 0.1 \tag{6.42}$$
$$x_1 = -10(x + f(x)) \tag{6.43}$$
$$x_2 = y + \frac{1}{2}x_1 \tag{6.44}$$
$$x = 3.12\int x_2\,dt + x_0 \tag{6.45}$$
Fig. 6.60. Partial computer setup for equations (6.42), (6.43), (6.44), and (6.45)
Fig. 6.61. Partial computer setup for equations (6.46), (6.47), (6.48), and (6.49)
The implementation of the central element, the Chua diode, is quite costly as
the implementation of each of the absolute value functions requires at least two
diodes and an open amplifier. f (x) is broken down and scaled like this:
126 If the attractor does not appear, which can be caused by slightly uncalibrated integrators,
increasing the value of the potentiometer set to 0.312 in figure 6.60 typically does the trick.
Fig. 6.62. Partial computer setup for equations (6.50), (6.51), (6.52), and (6.53)
A remarkable feature is that the behavior of this system is controlled by just one
parameter, λ. The scaled computer program is shown in figure 6.65. If no absolute
Fig. 6.63. Setup of the Chua oscillator on an Analog Paradigm Model-1 analog computer prototype
Fig. 6.64. Phase space display of the Double Scroll attractor
value function is readily available on the analog computer being used, one can be
set up as shown in figure 5.20.
An example of a phase space plot based on −ẍ and ẋ with λ ≈ 0.62 is shown
in figure 6.66.
One of the prettiest chaotic attractors by far is the Aizawa attractor,127 which is
described by the following three coupled differential equations
ẋ = x(z − β) − δy,
128 The original system has γ = 0.6, but 0.65 typically yields a better result.
Scaling this system is surprisingly difficult but finally yields the following set
of equations, which can be easily converted to the analog computer program shown
in figure 6.69:
$$x = \int y\,dt$$
$$y = \int(-x + 9yz)\,dt$$
$$z = -\int\left(-\frac{1}{3} + 6.75y^2\right)dt$$
Figure 6.70 shows the mesmerizing phase space plot of the Nosé-Hoover oscil-
lator.
This section shows the implementation of the SQM model (cf. [Sprott 2010,
p. 68 et seq.]), an example of a simple three-dimensional chaotic flow with a
quadratic nonlinearity at its center which is described by the following three cou-
pled differential equations:
$$\dot{x} = -z$$
$$\dot{y} = -x^2 - y$$
$$\dot{z} = \alpha + \alpha x + y$$
Here, α = 1.7 and the initial conditions are x(0) = 1, y(0) = −0.8, and z(0) = 0. To
scale the system, three scale factors λx = λz = 1/4 and λy = 1/6 are introduced, which
in turn yield, after collecting all resulting factors, the following scaled system:
$$\dot{x} = -z \tag{6.54}$$
$$\dot{y} = -2.666x^2 - y \tag{6.55}$$
$$\dot{z} = \frac{\alpha}{4} + \alpha x + 0.15y. \tag{6.56}$$
Thanks to the constant term α/4, the initial conditions mentioned in the original
system can be safely ignored as the system will enter its chaotic oscillation quickly even
without any explicit initial conditions.
The analog computer setup can be derived directly from the scaled equations
(6.54), (6.55) and (6.56) as shown in figure 6.71. This program is ideally suited
to be implemented on THE ANALOG THING as shown in figure 6.72. Figure
6.73 shows a typical phase space plot of x vs. y which was captured using a
Fig. 6.72. Implementation of the program shown in figure 6.71
Fig. 6.73. xy phase space plot of the SQM system
USB-soundcard129 with stereo line in and the software Oscilloppoi 130 running on
a Mac.
A Duffing equation describes any oscillator featuring a cubic stiffness term, i. e.,
a nonlinear elasticity, such as
$$\ddot{x} + \delta\dot{x} + \alpha x + \beta x^3 = 0.$$
129 It should be noted that most soundcards have AC coupled inputs, so any DC component
present in a signal is removed. Typically this approach is only viable with the analog computer
running in repetitive mode.
130 See https://anikikobo.com/software/oscilloppoi/index_en.html.
Thorough theoretical treatments of this oscillator may be found in [Korsch et al. 2008], [Höhler 1988], and [Chang 2017]. Apart from its
interesting mathematical properties it is a nice dynamic system which can be
implemented on an analog computer without complicated scaling. It invites
playing with its parameters to achieve all kinds of interesting phase space plots.
Figure 6.74 shows the implementation of the forcing function f (t) = γ cos(ω0 t).
To achieve a minimum of harmonic distortion, the two amplitude limiting Zener
diodes are connected to an integrator input with weight 1 instead of the sum-
ming junction. ε introduces a tiny positive feedback signal to avoid a decreasing
amplitude, while γ controls the amplitude of the output signal.
Figure 6.75 shows the straightforward implementation of the Duffing oscillator.
β was scaled down by a factor of 1/10 to allow for values of β up to 10. A
good initial parameter set to start experiments with is α = 1, β = 5,
γ = 1, δ = 0.02, ω0 = 1, and ε as small as possible.
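With the suggested starting parameters, the oscillator can also be explored numerically as follows; the forcing γcos(ω0t) is generated directly instead of by the oscillator circuit of figure 6.74.

```python
# Forced Duffing oscillator:
# x'' + delta x' + alpha x + beta x^3 = gamma cos(omega0 t),
# using the starting parameters suggested above.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta, omega0 = 1.0, 5.0, 1.0, 0.02, 1.0

def duffing(t, u):
    x, dx = u
    return [dx,
            gamma * np.cos(omega0 * t) - delta * dx - alpha * x - beta * x**3]

sol = solve_ivp(duffing, (0, 200), [0.0, 0.0], max_step=0.01)
x, dx = sol.y     # plotting x against -dx reproduces phase-space plots
                  # like those of figure 6.76
```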
Varying the parameters manually reveals the complex behavior of this forced
oscillator, which even exhibits chaotic characteristics in certain cases. The effect of
ω0 is quite distinct. Two typical x, −ẋ phase space plots are shown in figure 6.76.
To get a wider range for γ, the output f (t) may also be connected to an integrator
input with weight 10 on the Duffing oscillator sub-circuit.
Fig. 6.76. Two phase space plots showing the chaotic behavior of the Duffing oscillator
sin(φ) and cos(φ) are replaced by a simple harmonic oscillator yielding a sin(ωt)
and cos(ωt) signal pair.131
Figure 6.77 shows the overall computer setup. At the top is a harmonic oscil-
lator running at high speed with a time scale factor of k0 = 103 or, even better,
k0 = 104 . The potentiometer α determines the amount of damping and thus the
shape of the spiral. A three-dimensional spiral requires a third time-dependent
variable τ , which is generated by the integrator in the middle of the schematic set
to a time scale factor of k0 = 102 . These three integrators run in repetitive mode
with an operating time of 20 ms.
The second oscillator at the bottom of figure 6.77 generates an undamped
sine/cosine signal pair and is run in continuous mode, i. e., unaffected by the
Fig. 6.78. Snapshots of rotating spirals with different time scale factors
Figure 6.78 shows two typical snapshots of the rotating spiral. In both cases the
rotation had been stopped by placing the two oscillator integrators at the bottom
of figure 6.77 into HALT-mode by means of their ModeOP-inputs (not shown in
the setup). The picture on the left was taken with the time scale factors of the two
topmost integrators set to k0 = 103 while the right picture was obtained with k0 =
104 . If the integrators used do not offer this time scale factor directly, additional
input resistors yielding a sufficiently large input weight can be connected to their
SJ inputs. Input weights of up to 100 may be typically achieved by this technique.
The first example in the beautiful book [Havil 2019, p. 1 et seq.] is the Euler
spiral, which is a clothoid, a curve with curvature depending linearly on its arc
length. Clothoids play a significant role in road and railway track construction.
Imagine driving along a straight road approaching a curve. If the straight part of
the road was directly connected to the arc of a circular curve, one would have to
instantaneously change from a steering angle of zero to a non-zero steering angle,
which is not just impracticable at low speeds but outright dangerous at higher
velocities. Therefore, a safe road curve has a curvature that starts at zero, gently
rises to its maximum and then falls back to zero again when the next straight
road segment starts. These curves, which are encountered on many highways and
railway tracks, can be implemented using clothoids.
Generally, a curve parameterized by some functions x(t), y(t) has a slope
$$m = \frac{dy}{dx} = \frac{\dot{y}}{\dot{x}}, \tag{6.57}$$
an arc length
$$l = \int\limits_{t_0}^{t_1}\sqrt{\dot{x}^2 + \dot{y}^2}\,dt, \tag{6.58}$$
and a curvature
$$\kappa = \frac{\dot{x}\ddot{y} - \dot{y}\ddot{x}}{\sqrt{\left(\dot{x}^2 + \dot{y}^2\right)^3}}. \tag{6.59}$$
Using the general parameterization
$$x(t) = \int\limits_0^T\cos(f(t))\,dt\quad\text{and}\quad y(t) = \int\limits_0^T\sin(f(t))\,dt$$
The Euler spiral satisfies κ = t, i. e., ḟ = t, yielding f(t) = t²/2, from which its
parameterization
$$x(t) = \int\limits_0^T\cos\left(\frac{t^2}{2}\right)dt\quad\text{and}\quad y(t) = \int\limits_0^T\sin\left(\frac{t^2}{2}\right)dt$$
follows.
Fig. 6.80. Setup for generating an Euler spiral on THE ANALOG THING
Fig. 6.81. Typical output of the Euler spiral program
From humble beginnings in the early 20th century the behavior of neurons has
been described by increasingly realistic mathematical models. The very first of
these models, integrate-and-fire, is due to Louis Lapicque, who developed it in
1907. An improved model was developed in the early 1960s by Richard FitzHugh
and J. Nagumo and is described by the two coupled differential equations
$$\dot{v} = v - \frac{v^3}{3} - w + I_\text{ext}\quad\text{and}$$
$$\tau\dot{w} = v + a - bw.$$
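A numerical rendition of this model shows the characteristic spiking; a, b, τ, and Iext below are common textbook-style values and are assumptions as far as this text is concerned.

```python
# FitzHugh-Nagumo neuron as in the two equations above; parameter values
# are assumed examples.
from scipy.integrate import solve_ivp

a, b, tau, I_ext = 0.7, 0.8, 12.5, 0.5

def fhn(t, u):
    v, w = u
    return [v - v**3 / 3.0 - w + I_ext,
            (v + a - b * w) / tau]

sol = solve_ivp(fhn, (0, 200), [0.0, 0.0], max_step=0.05)
v, w = sol.y    # v(t) exhibits periodic spiking for this drive current
```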
$\ddot{y} + \mu\left(y^2 - 1\right)\dot{y} + y = 0$.135
A much more recent model was suggested by Hindmarsh and Rose136 and consists
of three coupled differential equations
134 In contrast to a harmonic oscillator which is typically based on an amplifier with suitable
feedback, running in resonance mode, a relaxation oscillator switches abruptly between charge
and discharge mode and thus yields non-harmonic output signals.
135 See section 6.6.
136 See [Hindmarsh et al. 1982] and [Hindmarsh et al. 1984].
Fig. 6.82. Numerical simulation of the three coupled differential equations (6.60), (6.61), and
(6.62)
Fig. 6.83. Scaled analog computer setup for the Hindmarsh-Rose model
Figure 6.84 shows a typical result obtained on a digital oscilloscope with this
analog computer circuit and Iext = 1.
In the early years of the 20th century, the English engineer and polymath Frederick
William Lanchester137 discovered the phenomenon of phugoid oscillation,
which describes a peculiar motion of an aircraft in which it follows a sinusoidal
flight path with respect to height over ground, pitching up and down and
thus climbing and descending repeatedly instead of remaining in level flight.138
A detailed description of this phenomenon can be found in [Simanca et al. 2002,
p. 3:1 et seq.], on which the following derivation is based.
Figure 6.85139 shows the basic glider aircraft considered here. φ is the angle
between the centerline of the glider and the horizontal axis, while drag and lift are
proportional to the square of the glider's velocity v. Introducing a drag coefficient
R, the drag will be Rv² while the lift will be considered to be equal to v² in the
following.
Summing up the forces acting on the airplane along its direction of flight yields
$$m\dot{v} = -mg\sin(\varphi) - Rv^2.$$
When the angle of attack becomes negative, the sine term will also become negative,
yielding a positive mg-term on the right-hand side, accelerating the airplane.
A positive angle of attack will accordingly decelerate the airplane. Dividing both
sides by m, changing R to R∗ to absorb the 1/m term, and setting the gravitational
acceleration g := 1 to simplify things even further gives
$$\dot{v} = -\sin(\varphi) - R^*v^2. \tag{6.64}$$
The centripetal force required to keep the glider on its momentary circular flight path of radius r is
$$F_z = \frac{mv^2}{r}.$$
Since
$$\dot{\varphi} = \frac{v}{r},$$
this can be rewritten as
$$F_z = mv\dot{\varphi},$$
which must be equal to the sum of the lift Lv², with some lift coefficient L, and the
downward force:
$$mv\dot{\varphi} = Lv^2 - mg\cos(\varphi)$$
Dividing by m, introducing a scaled lift coefficient L∗ as before, and solving for
φ̇ yields
$$\dot{\varphi} = L^*v - \frac{\cos(\varphi)}{v}. \tag{6.65}$$
The analog computer implementation of this problem is based on equations (6.64)
and (6.65).
To display the flightpath of this glider, the required x, y-coordinate tuple can
be generated by integrating over
$$\dot{x} = v\cos(\varphi)\quad\text{and}\quad\dot{y} = v\sin(\varphi).$$
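Before turning to the analog implementation, the phugoid equations can be test-flown digitally. The sketch below integrates (6.64), (6.65), and the display equations as one system; R∗, L∗, and the launch conditions are illustrative, with v(0) and φ(0) roughly following table 6.2.

```python
# Phugoid model: equations (6.64) and (6.65) plus the flight-path
# integration; R*, L*, and the initial conditions are illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

R, L = 0.04, 1.0                         # drag and lift coefficients R*, L*

def glider(t, u):
    v, phi, x, y = u
    return [-np.sin(phi) - R * v * v,    # (6.64)
            L * v - np.cos(phi) / v,     # (6.65)
            v * np.cos(phi),             # x-component of the flight path
            v * np.sin(phi)]             # y-component of the flight path

# thrown at v(0) = 0.79 with a -45 degree angle, roughly as in table 6.2
sol = solve_ivp(glider, (0, 40), [0.79, -np.pi / 4, 0.0, 0.0], max_step=0.005)
v, phi, x, y = sol.y                     # plot y over x for the flight path
```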
To make a (very long) story short, scaling this problem is pretty involved and may
be left to the interested reader.141
This problem was solved on a historic Telefunken RA 770 analog computer.142
Since this machine uses quarter square multipliers, all multipliers need their in-
put values with both positive and negative signs unlike modern multipliers, thus
cluttering the schematic a bit. Some of this machine’s multipliers also require a
dedicated buffer amplifier with a feedback of 10 instead of 1, which is pretty un-
usual from today’s perspective. Accordingly, the following schematics show some
technical detail which was typical for analog computer setups in the 1960s.
The first subcircuit is the generation of ± sin(φ) and cos(φ) based on φ̇ as
input shown in figure 6.86. There is nothing special about this circuit. Since the
simulation will typically be run in repetitive mode with rather high values of k0 ,
no amplitude stabilization is required here.
Figure 6.87 shows the partial computer setup yielding ±v and v 2 . Note the
input weights of 10 in this subcircuit. These were determined by manual scaling
and yield a greater sensitivity of the glider to R∗ and the gravitational pull.
140 Since sin(φ) and cos(φ) are derived from φ̇ instead of φ (φ is not easily restricted to
a fixed interval), setting φ(0) requires the initial conditions of two integrators to be set to
cos(φ(0)) and sin(φ(0)), respectively.
141 This is the revenge for all the books the author read during his university days, which
usually left unpleasant tasks like this to the reader. . .
142 This particular model is without much debate the top model of Telefunken’s analog
computer family and was introduced in 1966 and built until 1975. It was often part of a
hybrid computer installation. Not many of these machines are known to have survived.
Fig. 6.87. Computing ±v and v² – the scale factor 1 on the lower input of the integrator should be 1/2 if scaled “by the book”, but setting this parameter to 1 yields more sensitivity with respect to R
Parameter Value
− sin(φ(0)) 0.707
cos(φ(0)) 0.707
v(0) 0.79
R∗ 0.04
−g 0.107
Figure 6.90 shows the actual setup on a Telefunken RA 770.143 All in all this
simulation requires seven summers, five integrators, 12 potentiometers,144 one
“free” potentiometer, five multipliers, and one divider.145
It is quite fun to play with the two parameters v(0) and R. Figure 6.91 shows
a typical simulation result. The simulation was run at high speed in repetitive
mode so that a flicker free oscilloscope display could be achieved. This particular
result was obtained with the parameters as shown in table 6.2.
The glider starts at y = 0 at the right and is thrown at a rather steep angle
to the right. As v is pretty small, it goes into a dive, turns its direction to the left
143 The little black boxes dangling from the patch panel are just distributors with six inter-
connected jacks. These are often necessary as the special Telefunken patch cables cannot be
stacked together like banana plugs.
144 Two of those could be eliminated as they have been set to 1 during the scaling process.
145 Please note that the inverters required by the quarter square multipliers of the RA 770
have not been included in that list since these are only required due to the structure of this
particular analog computer.
and gains velocity and thus lift which yields height. It makes a tiny loop before it
slowly wiggles down.
The program described in this section simulates the flow of air around a special
type of airfoil, a Joukowsky airfoil.146 The basic idea is to simulate the flow of
air around an infinitely long rotating cylinder which is then transformed together
with its surrounding airflow into the shape of an airfoil by a suitable conformal
mapping.147
Spinning objects moving through air, such as golf balls or rotating cylinders,
experience a force which is due to the effect of their rotation in conjunction with
the surrounding medium. This is known as Magnus effect after Heinrich Gus-
tav Magnus, a German physicist who gave the first comprehensible explanation
of this phenomenon. The airflow around a rotating cylinder will be simulated
below using the following assumptions:
v̇ = 0 and Ȧ = 0
This airflow is a potential flow and its velocity field can thus be described as the
gradient of a complex function f(z) called the velocity potential. The function
$$f(z) = v(0)\left(ze^{i\varphi} + \frac{r^2e^{-i\varphi}}{z}\right) - i\frac{\Gamma}{2\pi}\log(z)$$
will be used in this example. Here r denotes the radius of the cylinder around
which the air flows, v(0) is the initial velocity of the air flow, φ the angle of
attack, and Γ the circulatory component.
First a circuit to generate the circumference of the rotating cylinder, i. e., a
circle, is needed. This is implemented as always by solving the differential equation
ÿ = −y as shown in figure 4.11 in section 4.2. The radius is set to about 0.4 by
means of the parameter a∗ , which controls the negative feedback, and the initial
condition a of the first integrator. Both of these must match to avoid artefacts at
the start of a simulation run. As before, the coefficient α provides a very small
positive feedback. It can be omitted if the radius remains constant during a pro-
longed run. If required, it should be set to about 0.005, otherwise the sine/cosine
signals will be distorted. The time scale factors of the two integrators should be
set at least to k0 = 103 .
The program for the complex velocity potential shown in figure 6.92 is much
more complicated. This circuit expects three parameters/input values:
– The angle of attack, tan(φ), which can be set manually by the free poten-
tiometer shown in the upper left (if no free potentiometer is available, this
parameter can be restricted to either positive or negative values only).
– A value τ varying linearly from −1 to +1 during one computer run. This
represents the x-component of an air particle moving from left to right (the
circuit actually requires −τ ).
– Finally, the input denoted y(0) represents the distance of the flow line from
the lower/upper surface of the airfoil. This value can be set either manually
by a potentiometer connected to +1 or −1 or can be generated automatically,
as shown below.
The small capacitors (ca. 47 pF to 68 pF) used in conjunction with the open
amplifiers are required to stabilize the circuit and prevent it from oscillating.
Putting all of this together and using a display with two independent (x, y)-
inputs the airflow around a rotating cylinder can be visualized as shown in figure
6.94.
Using a conformal or angle-preserving mapping it is now possible to transform
the cylinder cross section into the shape of an airfoil. Applying the same mapping
to the flow lines around the cylinder yields the corresponding flow lines around
that particular airfoil.
Conformal mappings are functions generally defined on the complex plane
f : C → C which locally preserve angles. One particularly demonstrative function
which will be used here is the Kutta-Joukowsky transform
$$z = \zeta + \frac{1}{\zeta}$$
developed by Martin Wilhelm Kutta and Nikolai Joukowsky. This function
maps a (unit) circle into the shape of a special type of airfoil. A noteworthy
property of this class of airfoils is a cusp at their trailing edge.
The actual conformal mapping as implemented is based on the transform
$$f(z) = (z - z_*) + \frac{\eta^2}{z - z_*},$$
which is split into its real and imaginary parts yielding
$$u(x(t), y(t)) = (x(t) - x_*) + \frac{\eta^2(x(t) - x_*)}{(x(t) - x_*)^2 + (y(t) - y_*)^2}\quad\text{and}$$
$$v(x(t), y(t)) = (y(t) - y_*) - \frac{\eta^2(y(t) - y_*)}{(x(t) - x_*)^2 + (y(t) - y_*)^2}.$$
The parameters x* and y* define the shape of the resulting airfoil. Good values to start exploration with are x* = 0.045 and y* = 0.052.¹⁴⁸ Figure 6.95 shows the
computer setup for the conformal mapping.
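The transform itself is easily explored numerically before it is patched. The following Python sketch uses the x* and y* values given above together with an assumed η = 1 and maps a few points of a circle of radius 0.4 onto the resulting airfoil-like contour:

import math

x_star, y_star, eta = 0.045, 0.052, 1.0   # x*, y* as above; eta is assumed

def mapped(x, y):
    # Real and imaginary parts u, v of the conformal mapping given above
    d = (x - x_star)**2 + (y - y_star)**2
    u = (x - x_star) + eta**2 * (x - x_star) / d
    v = (y - y_star) - eta**2 * (y - y_star) / d
    return u, v

for k in range(8):   # map a few points of the circle
    t = 2 * math.pi * k / 8
    x, y = 0.4 * math.cos(t), 0.4 * math.sin(t)
    print("(%+.3f, %+.3f) -> (%+.3f, %+.3f)" % (x, y, *mapped(x, y)))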
The two open amplifiers used to implement the required division operations
should have small capacitors connected between their respective outputs and sum-
ming junctions, as noted in section 5.2.2 and in the glider example before.
Since the conformal mapping circuit is required twice – to transform the circle
into a Joukowsky airfoil and to transform the airflow around the cylinder into
the airflow around that airfoil – the inputs x(t) and y(t) of the circuit shown
in figure 6.95 are rapidly switched between the outputs of the circuit generating
r sin(ωt) and r cos(ωt) and the circuit yielding the coordinates of an air particle
flowing around the hypothetical cylinder.
This can be implemented easily by means of two electronic switches as shown
in figure 6.96. The switches are controlled in such a way that they change position
every other repetitive cycle of the computer. This requires a toggle flip-flop driven
by the operate control line of the computer.
The overall computer setup is quite complex and requires a large complement
of computing elements, the majority of which are summers and multipliers. Figure
6.97 shows the resulting simulated airflow around an airfoil.
6.29 Heat transfer
This example deals with heat flow in a cable – a problem that is described by
a partial differential equation, i. e., a differential equation containing differentials
with respect to more than one variable. Such problems are frequently found in
physics, engineering, and other areas and cannot easily be solved directly by an
analog-electronic analog computer, because such a machine only allows integration
with respect to (machine) time.
A straightforward approach to treat partial differential equations on an analog
computer is to approximate the differentials which are not with respect to time
as differential quotients, thus discretizing the underlying space. Depending on the
required resolution this approach will require many computing elements, as can
be seen below.149
148 To reliably set values as small as these it is often beneficial to use two coefficient poten-
tiometers in series, the first set to a value like 0.1 for some prescaling.
149 More general information on this topic can be found in [Bryant et al. 1962, p. 37 et seq.] and [Gilliland 1967, p. 14-1 et seq.] as well as in other standard texts. The following example is based on [Telefunken 1963] and [Giloi et al. 1963, p. 264 et seq.]. A vastly different approach is described in [Albrecht 1968] where linear partial differential equations are transformed into ordinary differential equations by means of a Laplace transform (see appendix A). Section 7.7 shows yet another approach, based on random walks.
Fig. 6.95. Computer setup for the conformal mapping (inputs x and y; parameters x*, y*, and η²)
Fig. 6.96. Electronic switches feeding either sin(ωt)/cos(ωt) or the flow line coordinates into the conformal mapping circuit
Fig. 6.98. Cross section of the cable with inner conductor radius rᵢ and cable radius r_c
The heat flow is governed by the heat equation
$$\dot{T} = \frac{k}{\rho c_p}\,\nabla^2 T = a\nabla^2 T \tag{6.68}$$
with k denoting the thermal conductivity of the material, ρ its density, and c_p the specific heat capacity.
The temperature T within the material is a function of time t and a location
vector ⃗x, i. e., T (⃗x, t). The arguments of the function T are omitted as always to
avoid unnecessary clutter in the equations.
∇² is the Laplace operator, which is often also denoted by ∆. The initial temperature distribution is
$$T(r, 0) = \begin{cases} T_0 = \text{const.} & \text{if } r \le r_c\\ 0 & \text{else.} \end{cases} \tag{6.69}$$
Thus, the T (r, t) are replaced by a finite number of Ti (t) by approximating the
differentials by differential quotients with respect to r yielding a number of coupled
ordinary differential equations which can then be readily solved by an analog
computer. This is based on the following approximations for first and second
derivatives
$$\left.\frac{\partial f(x,t)}{\partial x}\right|_{x_i} \approx \frac{f_{i+1}(t) - f_{i-1}(t)}{2\Delta x} \quad\text{and} \tag{6.70}$$
$$\left.\frac{\partial^2 f(x,t)}{\partial x^2}\right|_{x_i} \approx \frac{f_{i+1}(t) - 2f_i(t) + f_{i-1}(t)}{(\Delta x)^2}. \tag{6.71}$$
Applying (6.70) and (6.71) to (6.68) yields the following general expression
$$\dot{T}_i = a\left(\frac{T_{i+1} - 2T_i + T_{i-1}}{(\Delta r)^2} + \frac{1}{r_c + i\Delta r}\,\frac{T_{i+1} - T_{i-1}}{2\Delta r}\right), \tag{6.72}$$
Choosing concrete values for a, ∆r, and r_c solves this problem and yields the following set of coupled ODEs:
$$T_0 := 1$$
$$\dot{T}_1 = \frac{1}{2}\left(\frac{5}{4}T_2 - 2T_1 + \frac{3}{4}T_0\right) = \frac{5}{8}T_2 - T_1 + \frac{3}{8}T_0$$
$$\dot{T}_2 = \frac{1}{2}\left(\frac{7}{6}T_3 - 2T_2 + \frac{5}{6}T_1\right) = \frac{7}{12}T_3 - T_2 + \frac{5}{12}T_1$$
Fig. 6.99. Analog computer program for the heat flow problem (coefficients 3/8, 5/12, 5/8, 7/16, and 7/12)
$$\dot{T}_3 = \frac{1}{2}\left(\frac{9}{8}T_4 - 2T_3 + \frac{7}{8}T_2\right) = \frac{9}{16}T_4 - T_3 + \frac{7}{16}T_2$$
$$T_4 := 0$$
A clever trick often used in cases like this is to invert the signs of every other
equation thus saving inverters by taking the implicit sign inversion that every
integrator causes into account. Since the rough discretization shown above yields
only three coupled ODEs, the second one must be rewritten as
$$-\dot{T}_2 = -\frac{7}{12}T_3 + T_2 - \frac{5}{12}T_1. \tag{6.73}$$
The resulting program is shown in figure 6.99 – due to the deliberate sign
reversal in (6.73) no additional inverters are required. In general, n − 1 integrators
are needed to solve such a partial differential equation in which the derivatives
with respect to the variable other than time are discretized by splitting them into
n slices. In order to minimize the error caused by this discretization the number
of slices should be as large as possible; three integrators, as in this example, is
typically not enough to obtain reasonable solutions.
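For reference, the three coupled ODEs can also be integrated digitally in a few lines. The following Python sketch uses simple Euler integration with an assumed step size and run time:

# Euler integration of the three coupled ODEs with T0 = 1 and T4 = 0
dt, t_end = 0.001, 10.0
T0, T4 = 1.0, 0.0            # fixed boundary values
T1 = T2 = T3 = 0.0           # initial temperatures of the inner slices

t = 0.0
while t < t_end:
    dT1 = 5/8 * T2 - T1 + 3/8 * T0
    dT2 = 7/12 * T3 - T2 + 5/12 * T1
    dT3 = 9/16 * T4 - T3 + 7/16 * T2
    T1, T2, T3 = T1 + dt * dT1, T2 + dt * dT2, T3 + dt * dT3
    t += dt

print(T1, T2, T3)   # approaches the stationary temperature distribution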
Using the same basic idea of discretizing a continuous space into a discrete grid
of cells as in the previous example it is also possible to solve higher dimensional
partial differential equations such as the two-dimensional heat equation¹⁵¹
$$\dot{u} = \alpha\nabla^2 u = \alpha\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right). \tag{6.74}$$
Discretizing this equation on a two-dimensional grid yields
$$\dot{u}_{i,j} = \alpha\left(u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4u_{i,j}\right) + q_{i,j} \tag{6.75}$$
describing a single node in the grid. q_{i,j} represents an optional heat source or sink at the node u_{i,j}. In the following example heat is only applied to or removed from node u₀,₀, i. e., q_{i,j} = 0 ∀ (i, j) ≠ (0, 0).
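This node equation can be cross-checked with a short digital simulation. The following Python sketch uses assumed values for α, the grid size, the time step, and the heat impulse, and does not exploit any symmetries:

import numpy as np

n, alpha, dt, steps = 9, 1.0, 0.05, 200   # assumed grid and parameters
u = np.zeros((n, n))                      # boundary nodes remain at 0

for step in range(steps):
    q = 0.1 if step < 10 else 0.0         # short heat impulse at the center
    du = np.zeros_like(u)
    # Discretized Laplacian over all inner nodes, cf. equation (6.75):
    du[1:-1, 1:-1] = alpha * (u[2:, 1:-1] + u[:-2, 1:-1] +
                              u[1:-1, 2:] + u[1:-1, :-2] -
                              4 * u[1:-1, 1:-1])
    du[n // 2, n // 2] += q               # heat source at the center node
    u += dt * du

print(u[n // 2, n // 2])                  # temperature at the center node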
Typically it is desirable to have as many grid nodes as possible in a simulation
to minimize the error caused by the above discretization process. Some classic ana-
log computer installations, especially those in the chemical industry which were
used for the design of fractionating columns, often featured many hundred integra-
tors and sometimes in excess of a thousand coefficient potentiometers. Whenever
possible, inherent symmetries of a problem should be exploited in order to conserve
computing elements.
In the following case an 8 × 8 cm² plate, perfectly insulated on its top and bottom surfaces, is considered. The plate's edges are held at a fixed temperature T. Square plates like this exhibit symmetries not only along their x- and y-axes but also with respect to their diagonals as shown in figure 6.100. Accordingly,
it is sufficient to take only one octant of the plate into account in the simulation,
thus saving a considerable number of computing elements. Nevertheless, this comes
at the cost that additional boundary conditions are required.
If only an octant instead of a quadrant of the plate as shown in figure 6.100
is implemented as a grid of computing nodes, the nodes along the diagonal and
those along the x-axis require some attention as they have to be connected to their
“mirror” neighbor nodes which replace the missing neighbor nodes in the adjacent
octants. The nodes along the vertical right hand side of the octant are held at the
fixed temperature T . The node in the center of the plate is denoted by u0,0 and
is the only node that allows heat to be applied or extracted.
According to equation (6.75), every integrator in the grid yielding ui,j requires
five inputs: The outputs of its direct neighbors ui−1,j , ui+1,j , ui,j−1 , and ui,j+1
151 The author is deeply indebted to Dr. Chris Giles, who not only suggested this problem
and that of appendix B, but also did the majority of the implementation.
Fig. 6.100. Symmetries of the plate: lines of symmetry, equipotential lines, and the octant of nodes to be solved
as well as its own inverted output yielding the −4ui,j term. Node u0,0 requires an
additional sixth input for q0,0 .
The various paralleled inputs of the elements along the diagonal should be
clustered into single inputs with weight 10 and a coefficient potentiometer suitably
set in the actual computer setup.
Figure 6.102 shows two typical results obtained by this setup. The input q₀,₀ has been subjected to a step impulse at the start of the computer run. The picture on the left shows this impulse in the top trace followed by the outputs of the nodes u₀,₀, u₁,₀, and u₂,₀. The picture on the right only shows the outputs of the nodes along the diagonal, i. e., u₀,₀, u₁,₁, u₂,₂, and u₃,₃.¹⁵²
Figure 6.103 again shows the impulse fed into node u0,0 and the outputs of
the following three nodes along the x-axis but this time with a fixed boundary
temperature T = 1. It can be clearly seen that the nodes are “heated” by the
boundary while the heat spike q0,0 accelerated the heating initially.
6.31 Systems of linear equations
It may seem bizarre to use analog computers, which are usually associated with the solution of differential equations, to solve sets of linear equations. Such equations
152 Note that the vertical sensitivity has been increased by a factor of 2 with respect to the left picture in order to obtain a reasonable output for the grid nodes further along the diagonal.
Fig. 6.101. Grid of integrators for one octant of the plate (inputs weighted 10, coefficients set to 0.4, boundary temperature T, heat input q₀,₀)
Fig. 6.103. Results along the x-axis with non-zero boundary temperature T
are typically solved numerically using an algorithmic approach such as the Gauss-
Seidel or Gauss-Jordan methods. Nevertheless, recent developments in artificial
intelligence and other areas have spurred the idea of using an analog computer as
a linear algebra accelerator for solving systems of linear equations.153
In the following, A, b⃗, and x⃗ are defined as follows:¹⁵⁴
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ \vdots & & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \in \mathbb{R}^{n\times n},\quad \vec{b} = \begin{pmatrix} b_1\\ \vdots\\ b_n \end{pmatrix} \in \mathbb{R}^n,\quad \vec{x} = \begin{pmatrix} x_1\\ \vdots\\ x_n \end{pmatrix} \in \mathbb{R}^n.$$
It is assumed that the matrix A is non-singular. The task at hand is now to solve
a system of linear equations such as
A⃗x = ⃗b (6.76)
153 Cf. [Huang et al. 2017]. The work presented in this section has been done in collaboration
with Dirk Killat, see [Ulmann et al. 2019].
154 $A \in \mathbb{C}^{n\times n}$, $\vec{b} \in \mathbb{C}^n$, and $\vec{x} \in \mathbb{C}^n$ are not ruled out but have to be split into their respective real and imaginary parts in order to apply the methods described here.
Fig. 6.104. Unsuitable direct approach of solving a system of linear equations on an analog computer
For a 2 × 2 system, solving the first equation for x₁ and the second for x₂ yields
$$x_1 = \frac{b_1 - a_{12}x_2}{a_{11}} \quad\text{and}\quad x_2 = \frac{b_2 - a_{21}x_1}{a_{22}},$$
which can be readily transformed into the analog computer setup shown in figure
6.104.
Although this direct approach looks elegant, it doesn’t work! First, the coef-
ficients 1/aii are hard to implement as an analog computer is always restricted to
values in the interval [−1; 1]. Second, the main problem is that the setup shown
contains algebraic loops, which are loops consisting of an even number of sum-
mers. Since every summer performs an implicit sign inversion an even number
of such elements will yield the same sign at its output as at its input, resulting
in a positive feedback loop, which will result in an inherently unstable, typically
oscillating, circuit.
Accordingly, another approach, better suited to an analog computer, is re-
quired. The basic idea is to transform a system of linear equations such as (6.76)
into a system of coupled differential equations. The solution vector ⃗x is then ini-
tialized with some initial value and an error vector ⃗ε is computed, which is then
used to correct this initial guess of ⃗x:
$$A\vec{x} - \vec{b} = \vec{\varepsilon} \quad\text{and}$$
$$\dot{\vec{x}} = -\vec{\varepsilon}$$
or, in a component-wise fashion,
$$-\dot{x}_i = \sum_{j=1}^{n} a_{ij}x_j - b_i, \quad 1 \le i \le n. \tag{6.79}$$
Although this approach looks elegant it still has a major drawback in that it
often does not converge. Basically, the underlying system of coupled differential
equations is stable when ẋi = 0 ∀ i. This requires A to be symmetric and positive
definite, i. e., all roots λ of the characteristic equation
$$\det\left(A - \lambda I\right) = 0 \tag{6.80}$$
with I denoting the n × n unit matrix must be greater than zero. Unfortunately,
this is often not the case, making this direct approach unsuitable for most real
applications.
Figure 6.106 shows the non-convergent behavior of this simple indirect ap-
proach applied to the following system of linear equations:
$$\begin{pmatrix} 0.8 & 0.5 & 0.3\\ 0.1 & 0.6 & 0.8\\ 0.2 & 0.9 & 0.4 \end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} = \begin{pmatrix} 0.8\\ 0.7\\ 0.3 \end{pmatrix} \tag{6.81}$$
155 More information can be found in [Fifer 1961, p. 842 et seq.], [Adler 1968, p. 281 et
seq.], and [Kovach et al. 1962].
156 A rather technical proof of this may be found in [Fifer 1961, p. 842 et seq.].
Fig. 6.105. Indirect approach to solving a system of linear equations on an analog computer
Multiplying both sides of (6.76) by Aᵀ yields the equivalent system
$$A^\mathrm{T}A\vec{x} = A^\mathrm{T}\vec{b} \tag{6.82}$$
whose coefficient matrix AᵀA is symmetric and positive definite for non-singular A. A direct approach might convert (6.76) into (6.82) by means of a preliminary step performed on a digital computer.
Nevertheless, a purely analog implementation of this approach can be achieved by computing the error resulting from an initial setting of x⃗ by
$$A^\mathrm{T}A\vec{x} - A^\mathrm{T}\vec{b} = \vec{\zeta}$$
and setting
$$\dot{\vec{x}} = -\vec{\zeta}.$$
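The different behavior of the two variants can be demonstrated numerically. The following Python sketch applies Euler integration (assumed step size and number of steps) of both ẋ⃗ = −ε⃗ and ẋ⃗ = −ζ⃗ to system (6.81):

import numpy as np

A = np.array([[0.8, 0.5, 0.3],
              [0.1, 0.6, 0.8],
              [0.2, 0.9, 0.4]])
b = np.array([0.8, 0.7, 0.3])

def integrate(use_transpose, dt=0.01, steps=20_000):
    # Euler integration of x' = -(Ax - b) or x' = -A^T (Ax - b)
    x = np.zeros(3)
    for _ in range(steps):
        err = A @ x - b
        x -= dt * (A.T @ err if use_transpose else err)
    return x

print("plain:", integrate(False))    # diverges, cf. figure 6.106
print("A^T:  ", integrate(True))     # converges since A^T A is s.p.d.
print("exact:", np.linalg.solve(A, b))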
Fig. 6.106. Divergent behavior of the computer setup shown in figure 6.105
6.32 Human-in-the-loop
Fig. 6.108. Computer setup for a 3 × 3 system A⃗x = ⃗b
Fig. 6.107. Convergent behavior of the setup based on AᵀA: x₁, x₂, and x₃ over t [ms]
Analog computers are ideally suited to human-in-the-loop simulations, in which a human operator interacts with a simulated system in real time, such as a flight simulator, driving simulator, power plant simulator, etc. This section shows a simple example of this type of simulation. A user controls the rotational and translational movements of a spacecraft in empty space by firing rotational and translational thrusters using a joystick.¹⁵⁷
First of all, a spacecraft-like figure is required for the display. Using a
sine/cosine quadrature generator158 a unit circle can be generated which is then
suitably deformed using the circuit shown in figure 6.110 yielding the shape shown
in figure 6.111.159 The only important requirement here is that the spacecraft
figure has a distinguishable front and back. The output of this subcircuit forms a
time varying two-element vector
$$\vec{s} = \begin{pmatrix} s_x\\ s_y \end{pmatrix}$$
with λₓ = 0.04 and λ_y = 0.05 being reasonable values to give a suitably sized shape.
Next, suitable signals have to be derived from the joystick interface to control
the angular velocity φ̇ and the longitudinal acceleration a of the spacecraft. Figure
6.112 shows the corresponding computer setup with −1 ≤ jx ≤ 1 and −1 ≤ jy ≤ 1
157 Cf. [Ulmann 2016] and see appendix G for a simple joystick interface.
158 See figure 4.11 in section 4.2.
159 Using a diode function generator instead of this simple setup can yield a much more sophisticated spacecraft shape.
Fig. 6.110. Creating a spacecraft-like shape by deforming a circle
Fig. 6.111. Shape of the spacecraft as displayed on an x, y-display
Fig. 6.112. Deriving the angular velocity φ̇ and the longitudinal acceleration a from the joystick outputs jₓ and j_y
representing the two output signals from the joystick.160 The two Zener-diodes in
series, each with a Zener-voltage of 10 V, limit the output signal of the integrator
to avoid an overload condition.
Using the angular velocity φ̇, the corresponding function values ± sin(φ) and
± cos(φ), which will be used to rotate the spacecraft, can be derived. Figure 6.113
shows the computer setup yielding these functions.
Fig. 6.113. Generating ± sin(φ) and ± cos(φ) from φ̇
160 Using comparators and electronic switches in conjunction with one integrator each, even
cheap digital “retro gaming” joysticks may be used instead of an analog joystick to produce
continuous voltage signals.
Fig. 6.114. Rotating the spacecraft shape by the angle φ
The time-varying vector r⃗ of the rotated spacecraft shape results from multiplying s⃗ by a rotation matrix,
$$\vec{r} = \begin{pmatrix} \cos(\varphi) & -\sin(\varphi)\\ \sin(\varphi) & \cos(\varphi) \end{pmatrix}\vec{s},$$
as shown in figure 6.114. If the computer has resolver units, one of them can be used
to implement the rotation instead of using four multipliers and two summers.161
This rotated spacecraft shape must now be able to move around on the display
screen controlled by “firing” its translational thrusters which are assumed to work
only along its longitudinal direction. The acceleration a from the joystick controller
therefore must be integrated twice in a component-wise fashion taking the current
angle of rotation φ into account. Figure 6.115 shows the subprogram for this
translational movement.
In total, the overall program requires eight summers, nine integrators, six
coefficient potentiometers, nine multipliers, several free diodes, an x, y-display,
and an analog joystick.
161 Basically, a resolver combines the required function generators, multipliers, and summers,
which are required for operations like the transformation of polar coordinates into rectangular
ones, and vice versa, or the rotation of coordinate systems.
Fig. 6.115. Subprogram for the translational movement of the spacecraft (the acceleration a is integrated twice with k₀ = 10, resolved into x- and y-components by cos(φ) and sin(φ))
6.33 Inverted pendulum
The behavior of the pendulum is derived using the Lagrangian L = T − V of the system. The potential energy of the pendulum bob is
$$V = mgl\cos(\varphi)$$
with g representing the gravitational acceleration. The total kinetic energy is the sum of the kinetic energies of the pendulum bob having mass m and the moving cart with mass M, i. e.,
$$T = \frac{1}{2}\left(Mv_c^2 + mv_p^2\right) \tag{6.87}$$
Fig. 6.116. Inverted pendulum on a cart: a rod of length l at angle φ carrying the bob of mass m, mounted on a cart of mass M that moves along the x-axis under an external force F
where vc is the velocity of the cart, and vp is the velocity of the pendulum bob.
Obviously,
vc = ẋ (6.88)
and (a bit less obviously)
$$v_p = \sqrt{\left(\frac{d}{dt}\left(x - l\sin(\varphi)\right)\right)^2 + \left(\frac{d}{dt}\left(l\cos(\varphi)\right)\right)^2},$$
yielding
$$v_p^2 = \dot{x}^2 - 2\dot{x}l\dot{\varphi}\cos(\varphi) + l^2\dot{\varphi}^2\cos^2(\varphi) + l^2\dot{\varphi}^2\sin^2(\varphi) = \dot{x}^2 - 2\dot{x}\dot{\varphi}l\cos(\varphi) + l^2\dot{\varphi}^2. \tag{6.89}$$
The Lagrangian of the system is thus
$$L = T - V = \frac{1}{2}(M+m)\dot{x}^2 - m\dot{x}\dot{\varphi}l\cos(\varphi) + \frac{1}{2}ml^2\dot{\varphi}^2 - mgl\cos(\varphi). \tag{6.90}$$
The Euler-Lagrange equations of the general form
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} = \frac{\partial L}{\partial q_i}$$
yield
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = F \quad\text{and}$$
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\varphi}} - \frac{\partial L}{\partial \varphi} = 0.$$
The required derivatives with respect to x are
$$\frac{\partial L}{\partial x} = 0, \quad \frac{\partial L}{\partial \dot{x}} = (M+m)\dot{x} - ml\dot{\varphi}\cos(\varphi), \quad\text{and}\quad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = (M+m)\ddot{x} - ml\ddot{\varphi}\cos(\varphi) + ml\dot{\varphi}^2\sin(\varphi).$$
Similarly,
$$\frac{\partial L}{\partial \varphi} = ml\dot{x}\dot{\varphi}\sin(\varphi) + mgl\sin(\varphi),$$
$$\frac{\partial L}{\partial \dot{\varphi}} = -ml\dot{x}\cos(\varphi) + ml^2\dot{\varphi}, \quad\text{and}$$
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\varphi}} = -ml\ddot{x}\cos(\varphi) + ml\dot{x}\dot{\varphi}\sin(\varphi) + ml^2\ddot{\varphi}.$$
Substituting these into the Euler-Lagrange equations gives
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = (M+m)\ddot{x} - ml\ddot{\varphi}\cos(\varphi) + ml\dot{\varphi}^2\sin(\varphi) = F \tag{6.91}$$
and
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\varphi}} - \frac{\partial L}{\partial \varphi} = -ml\ddot{x}\cos(\varphi) + ml\dot{x}\dot{\varphi}\sin(\varphi) + ml^2\ddot{\varphi} - ml\dot{x}\dot{\varphi}\sin(\varphi) - mgl\sin(\varphi) = 0. \tag{6.92}$$
The two equations (6.91) and (6.92) fully describe the motion of an inverted
pendulum mounted on a cart capable of moving along the x direction under an
external force F and can now be used to derive an analog computer setup for this
problem.
Assuming that the mass m of the pendulum bob is negligible compared to the
mass M of the moving cart equation (6.91) can be simplified to
M ẍ = F,
i. e., the movement of the pendulum mounted on the cart has no influence on the
cart’s movement. With this assumption the inverted pendulum problem can be
described just by equation (6.92), assuming that
$$\ddot{x} = \frac{F}{M}.$$
Equation (6.92) can be directly converted into the corresponding analog computer program as shown in figure 6.117. Based on the input variable ẍ this setup generates the time derivative φ̇ of the generalized coordinate φ, which is then used
to generate the two required harmonic terms sin(φ) and cos(φ) without the need
for a sine/cosine function generator, as shown in figure 6.118. Note that the term
l has been deliberately omitted by defining l := 1.¹⁶⁴ γ₁ controls how much the acceleration ẍ affects the pendulum while γ₂ controls its effect on the movement of the cart.¹⁶⁵
The potentiometer labeled g yields the gravitational acceleration factor while
β is an extension of the basic equation of motion and introduces a damping term
for the angular velocity φ̇ of the pendulum, making the simulation more realistic.
Setting β = 0 results in a frictionless pendulum.
What would an analog computer setup be without a proper visualization of
the dynamic system? The program shown in figure 6.119 displays the cart with
its attached pendulum rod as it moves along the x-axis controlled by the input
function ẍ. To demonstrate the behavior of the system a double pole switch with
neutral middle position can be used to generate a suitable input signal ẍ to push
the cart to the left or right. With a suitable low time scale factor set on the inte-
grators yielding cos(φ) and sin(φ) the pendulum can even be manually balanced
for a short time, given some operator training.
To generate a flicker-free display an amplitude stabilized quadrature signal
pair sin(ωt) and cos(ωt) of some kHz is required; this can be derived from the
computer setup shown in figure 4.11 or 4.13 in section 4.2. This signal is used to
draw a circular or elliptical cart as well as the rotating pendulum rod mounted on
the cart.
Fig. 6.117. Simulation program for the inverted pendulum (input ẍ weighted by γ₁ and γ₂, gravitational acceleration g, damping coefficient β)
Fig. 6.118. Generating sin(φ) and cos(φ) from φ̇ (time scale factors k₀ = 10)
The length of the rod is set by the potentiometer labeled l. Since the pendulum
rod has one of its ends fixed at the center of the cart figure the sin(ωt)-signal has
to be shifted by an amount r. The actual rotation is done by multiplication with
sin(φ) and cos(φ). The circuit yields two output signal-pairs: Px and Py are used
to draw the pendulum rod while Cx and Cy display the moving cart. Therefore,
an oscilloscope with two independent (multiplexed) x, y-inputs is required for this
simulation if both cart and pendulum are to be displayed. A screenshot of a
typical resulting display with the pendulum just tipping over is shown in figure
6.120.
What happens when the mass of the pendulum bob is no longer negligible? In
this case equation (6.91) can no longer be simplified as before but must be fully
taken into account to account for the influence of the swinging pendulum on the
cart. Solving this equation for ẍ yields
$$\ddot{x} = \frac{1}{M+m}\left(F + ml\ddot{\varphi}\cos(\varphi) - ml\dot{\varphi}^2\sin(\varphi)\right),$$
which can be easily transformed into the computer setup shown in figure 6.121.
Fig. 6.119. Display circuit for the cart and pendulum (outputs Pₓ, P_y for the pendulum rod and Cₓ, C_y for the cart)
The function ẍ generated by this circuit is now fed into the associated input
of the subcircuit shown in figure 6.117. An external force acting on the cart can
be introduced by means of switch S1 while γ3 controls the strength of this force.
Fig. 6.121. Computing ẍ when the mass of the pendulum bob is no longer negligible (external force applied via switch S₁ with strength γ₃)
6.34 Elastic pendulum
The Euler-Lagrange equations of the general form
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0$$
yield
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = m\ddot{x} - m(x_0 + x)\dot{\varphi}^2 + kx - gm\cos(\varphi) \quad\text{and}$$
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\varphi}} - \frac{\partial L}{\partial \varphi} = 2m(x_0 + x)\dot{x}\dot{\varphi} + m(x_0 + x)^2\ddot{\varphi} - gm(x_0 + x)\sin(\varphi).$$
ẍ = lφ̇2 − kx + g cos(φ).
Fig. 6.122. Analog computer program for the elastic pendulum (initial conditions x(0) ≈ 0.05 and φ(0); parameters k, l, x₀, and g = 1/10)
Figure 6.122 shows the corresponding analog computer program. Note that only one divider is required due to the trick of setting g = 1/10. This not only saves a computing element but also reduces errors.
6.35 Double pendulum
Fig. 6.123. Double pendulum with angles φ₁ and φ₂ and masses m₁ and m₂
Even more complicated, but also more mesmerizing, is the simulation of the double
pendulum shown in figure 6.123 on an analog computer.166
The equations of motion will again be derived by determining the Lagrangian
L = T − V with the two generalized coordinates φ1 and φ2 . T represents the total
kinetic energy while V is the potential energy of the system. The potential energy
is
V = −g ((m1 + m2 )l1 cos(φ1 ) + m2 l2 cos(φ2 )) (6.95)
with m1 and m2 representing the masses of the two bobs mounted on the tips
of the two pendulum rods (which are assumed to be weightless). As always, g
represents the gravitational acceleration.
To derive the total kinetic energy the positions of the pendulum arm tips
(x1 , y1 ) and (x2 , y2 ) are required:
x1 = l1 sin(φ1 )
y1 = l1 cos(φ1 )
x2 = l1 sin(φ1 ) + l2 sin(φ2 )
y2 = l1 cos(φ1 ) + l2 cos(φ2 )
166 This has also been done by [Mahrenholtz 1968, pp. 159–165] which served as an inspi-
ration for this example, especially with respect to the display circuit.
and thus
$$\dot{x}_1^2 + \dot{y}_1^2 = \dot{\varphi}_1^2 l_1^2\cos^2(\varphi_1) + \dot{\varphi}_1^2 l_1^2\sin^2(\varphi_1) = \dot{\varphi}_1^2 l_1^2\left(\cos^2(\varphi_1) + \sin^2(\varphi_1)\right) = \dot{\varphi}_1^2 l_1^2.$$
The Lagrangian L results from equations (6.95) and (6.96) with (6.97) and
(6.98) as
$$L = \frac{1}{2}l_1^2\dot{\varphi}_1^2(m_1 + m_2) + \frac{1}{2}m_2\dot{\varphi}_2^2 l_2^2 + m_2\dot{\varphi}_1\dot{\varphi}_2 l_1 l_2\cos(\varphi_1 - \varphi_2) + g(m_1 + m_2)l_1\cos(\varphi_1) + gm_2 l_2\cos(\varphi_2). \tag{6.99}$$
$$\frac{\partial L}{\partial \varphi_2} = m_2\dot{\varphi}_1\dot{\varphi}_2 l_1 l_2\sin(\varphi_1 - \varphi_2) - gm_2 l_2\sin(\varphi_2),$$
$$\frac{\partial L}{\partial \dot{\varphi}_2} = m_2\dot{\varphi}_2 l_2^2 + m_2\dot{\varphi}_1 l_1 l_2\cos(\varphi_1 - \varphi_2), \quad\text{and}$$
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{\varphi}_2} = m_2\ddot{\varphi}_2 l_2^2 + m_2 l_1 l_2\left[\ddot{\varphi}_1\cos(\varphi_1 - \varphi_2) - \dot{\varphi}_1\sin(\varphi_1 - \varphi_2)(\dot{\varphi}_1 - \dot{\varphi}_2)\right]$$
based on (6.99). Substituting these into (6.100) yields the equations of motion for the two angular accelerations φ̈₁ and φ̈₂, which are implemented by the circuits shown in figures 6.124 and 6.125.
Fig. 6.124. Implementation of equation (6.104)
Fig. 6.125. Implementation of the corresponding equation yielding φ̈₂
Fig. 6.126. Generating sin(φ) and cos(φ)
Before a run with the initial angles φ₁(0) and φ₂(0) is started, these potentiometers have to be set to cos(φ₁(0)), sin(φ₁(0)) and cos(φ₂(0)), sin(φ₂(0)), respectively.
Since φ̇1 and φ̇2 are readily available from the circuits shown in figures 6.124
and 6.125, two of these harmonic function generators can be directly fed with
these values. The input for the third function generator is generated by a two-
input summer as shown in figure 6.127.
Fig. 6.127. Deriving φ̇₁ − φ̇₂ from −φ̇₁ and φ̇₂ by means of a two-input summer
Fig. 6.128. Display circuit for the double pendulum (coordinates x₁, y₁ and x₂, y₂, scaled by l₁/(l₁+l₂) and l₂/(l₁+l₂))
With φ̇1 and φ̇2 and thus sin(φ1 ), cos(φ1 ), and sin(φ2 ), cos(φ2 ) readily avail-
able the double pendulum can be displayed on an oscilloscope featuring two sep-
arate x, y-inputs by means of the circuit shown in figure 6.128. This requires, as
before, a high-frequency input sin(ωt), which can be generated as usual.
Figure 6.129 shows a long-term exposure of the movements of the double
pendulum starting as an inverted pendulum with its first pendulum rod pointing
upwards while the second one points downwards. This simulation requires ten
integrators, 15 summers, 16 multipliers, and 17 coefficient potentiometers.
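The behavior shown in figure 6.129 can be reproduced digitally by writing the two Euler-Lagrange equations following from (6.99) as a linear system in the angular accelerations and solving it in every time step. The following Python sketch does this with assumed masses, lengths, initial values, and step size:

import numpy as np

m1, m2, l1, l2, g = 1.0, 1.0, 1.0, 1.0, 9.81   # assumed parameters
phi = np.array([np.pi, 0.0])      # first rod up, second down
phi_dot = np.array([0.0, 0.01])   # tiny perturbation starts the motion
dt = 1e-4

for _ in range(100_000):          # 10 s of simulated time
    d = phi[0] - phi[1]
    # Mass matrix and right-hand side derived from the Lagrangian (6.99)
    M = np.array([[(m1 + m2) * l1**2,        m2 * l1 * l2 * np.cos(d)],
                  [m2 * l1 * l2 * np.cos(d), m2 * l2**2]])
    rhs = np.array([-m2 * l1 * l2 * phi_dot[1]**2 * np.sin(d)
                    - g * (m1 + m2) * l1 * np.sin(phi[0]),
                    m2 * l1 * l2 * phi_dot[0]**2 * np.sin(d)
                    - g * m2 * l2 * np.sin(phi[1])])
    phi_ddot = np.linalg.solve(M, rhs)
    phi_dot += dt * phi_ddot
    phi += dt * phi_dot

print(phi, phi_dot)   # chaotic trajectory of the two angles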
6.36 Making Music
Being remarkably similar to a music synthesizer, the idea of making music with an analog computer is quite obvious. This section describes a simple computer setup acting as a monophonic synthesizer.
167 In August 2020, Hainbach, Hans Kulk, and Bernd Ulmann streamed a live discussion titled “Analog computers for music”, which can be found here: https://www.youtube.com/watch?v=bgyzeyatS-0.
Fig. 6.130. Analog computer setup as monophonic synthesizer
The variable function generator is connected to the triangular wave output and can be used to generate fairly arbitrary waveforms which yield interesting sounds. The fact that it is fed a triangular wave instead of a sawtooth signal ensures that the function set on the function generator is played in a symmetric way, thereby guaranteeing that no steps resulting in audible clicks occur between the end of one period of the output waveform and the start of the next.
The most interesting subcircuit is shown in the lower third of figure 6.130. The pitch output of a typical keyboard varies linearly with 1 V/octave, which is convenient from a keyboard point of view but does not match the requirement of a linearly increasing control voltage for an oscillator such as that described above.
This logarithmic voltage/frequency relationship stems from our Western musical culture, where the frequency of a tone doubles from one octave to the next. Given our traditional half-tone scheme with twelve half-tones comprising one octave, the increase in frequency from one half-tone to the next is $\sqrt[12]{2}$. Accordingly, the voltage output of the keyboard must be fed into an exponential function generator yielding a suitable control voltage for the oscillator.
Here things get a bit involved: The keyboard used in this example is pretty small
with 32 half-tones and its linear output voltage, which is always positive, must be
mapped to the analog computer value interval of [−1, 1] in order to make best use
of the variable function generator used to implement the exponential function.
6.36 Making Music 213
x g(x) x g(x)
-1.0 0.0700 1.0 0.4700
-0.9 0.0769 0.9 0.4274
-0.8 0.0846 0.8 0.3885
-0.7 0.0931 0.7 0.3533
-0.6 0.1024 0.6 0.3212
-0.5 0.1126 0.5 0.2920
-0.4 0.1239 0.4 0.2654
-0.3 0.1363 0.3 0.2414
-0.2 0.1499 0.2 0.2195
-0.1 0.1649 0.1 0.1995
0 0.1814
This mapping is done by the two summers shown in the lower sub-circuit of figure 6.130. The function generator actually implements
$$g(x) = \frac{7}{100}\,\sqrt[12]{2.09}^{\;15.5(x+1)} \tag{6.106}$$
instead of using $\sqrt[12]{2}$ as the basis. This is due to the fact that the implementation of the triangular wave oscillator suffers a tiny bit from the unavoidable hysteresis of the electronic comparator and its associated switch. Table 6.3 shows the 21 interpolation points of the function generator used to implement g(x).
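A quick numerical check (Python) confirms that (6.106) reproduces the interpolation points of table 6.3:

base = 2.09 ** (1 / 12)   # basis compensating the comparator hysteresis

def g(x):
    return 7 / 100 * base ** (15.5 * (x + 1))

for x in (-1.0, 0.0, 1.0):
    print("g(%+.1f) = %.4f" % (x, g(x)))   # 0.0700, 0.1814, 0.4700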
Figure 6.131 shows the (very simple) audio adapter connecting the output of
the analog computer to a conventional amplifier or a computer sound card. The
two strings of 1N4148 diodes limit the signal amplitude to about 1.4 V to avoid
damage to the amplifier or sound card in case of excessive levels. Figure 6.132
gives an impression of the overall setup.
6.37 Neutron kinetics
This section shows a simple implementation of neutron kinetics for the simulation of nuclear reactors fuelled with ²³⁵U or ²³⁹Pu respectively.¹⁶⁸ Following [Fenech et al. 1973, p. 128] neutron kinetics in such a reactor can be described by
$$\dot{n} = \frac{n}{l^*}\left(\delta K - \beta\right) + \sum_i \lambda_i c_i + s \quad\text{and} \tag{6.107}$$
$$\dot{c}_i = \frac{n\beta_i}{l^*} - \lambda_i c_i \tag{6.108}$$
with n being the neutron density, s representing an external neutron source (which is often used to “start” a nuclear reactor), and l* representing the effective neutron lifetime, which depends on the reactor design and geometry and can range between values as large as 10⁻² s and as small as 10⁻⁴ s.¹⁶⁹ cᵢ are the precursors of the i-th group of delayed neutrons, βᵢ represents the fraction of the i-th delayed neutron group, and λᵢ is the decay constant for the i-th precursor group.
Typically, six groups of delayed neutron precursors are taken into account
for a nuclear reactor simulation. Equation (6.107) could be programmed in a
straightforward way using one integrator for each neutron group. This section
shows how a specialized computing element can be devised from actual data on
168 An overview on nuclear reactor simulation techniques can be found in [Morrison 1962].
Details of neutron kinetics and related topics are described in depth in [Weinberg et al. 1958].
An actual implementation of a nuclear power plant simulator is shown in [Fenech et al. 1973].
169 s will be ignored in the following – an external neutron source can always be modelled
by adding s to the input of the network described here.
                    ²³⁵U                                          ²³⁹Pu
 i   λᵢ      βᵢ        Cᵢ = ηβᵢ/λᵢ  Rᵢ [Ω]       λᵢ      βᵢ         Cᵢ = ηβᵢ/λᵢ  Rᵢ [Ω]
 1   0.0127  0.000237  1.9 µF      414000       0.0129  0.0000798   619 nF      1252332
 2   0.0317  0.001385  4.37 µF     72187        0.0311  0.000588    1.89 µF     170128
 3   0.115   0.001222  1.06 µF     82034        0.134   0.0004536   339 nF      220138
 4   0.311   0.002645  0.85 µF     37828        0.331   0.0006881   208 nF      145248
 5   1.4     0.000832  59 nF       121065       1.26    0.0002163   17 nF       466853
 6   3.87    0.000169  4.4 nF      587268       3.21    0.0000734   2.2 nF      1416029
Table 6.4. Delayed neutron data for ²³⁵U and ²³⁹Pu reactors
nuclear processes that implements the neutron kinetics of a nuclear reactor with reasonable accuracy and saves a lot of common computing elements.
Figure 6.133 shows the basic structure of such a computing element implementing the neutron kinetics for either a ²³⁵U or ²³⁹Pu reactor. The basic circuit consisting of RL and either CL1 or CL2 is an integrator with
$$l^* = RL \cdot CL_j, \quad 1 \le j \le 2.$$
Choosing RL = 100 kΩ and CL1 = 1 nF yields l* = 10⁻⁴ s. With the same RL a CL2 = 100 nF results in l* = 10⁻² s, two suitable and not too unrealistic values for the effective neutron lifetime in a nuclear reactor. The switch S1 selects which of these two values is to be used in a simulation run.
The six groups of delayed neutron precursors and resulting neutrons can be modelled by series circuits consisting of RUᵢ and CUᵢ for the case of ²³⁵U and RPᵢ and CPᵢ for ²³⁹Pu respectively. The capacitor values are determined by¹⁷⁰
$$C_{*i} = \eta\,\frac{\beta_i}{\lambda_i}$$
with η being a scaling factor to get reasonable values for the capacitors. In this case η = 10² was chosen. With the capacitances determined this way, the resistors can be determined according to
$$R_{*i} = \frac{1}{\eta\lambda_i C_{*i}}.$$
Table 6.4 shows the λᵢ and βᵢ for ²³⁵U and ²³⁹Pu with the corresponding values for the resistor-capacitor networks for both cases.¹⁷¹
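The table values can be reproduced directly from the two formulas above. The following Python sketch computes the ²³⁵U column with η = 10²; the capacitances come out in µF under this scaling, and small deviations from the tabulated resistances stem from the rounding of the capacitor values:

eta = 100   # scaling factor chosen above

lam  = [0.0127, 0.0317, 0.115, 0.311, 1.4, 3.87]
beta = [0.000237, 0.001385, 0.001222, 0.002645, 0.000832, 0.000169]

for i, (l_i, b_i) in enumerate(zip(lam, beta), start=1):
    C_uF = eta * b_i / l_i                # capacitance, read in microfarads
    R = 1 / (eta * l_i * C_uF * 1e-6)     # resistance in ohms (C in farads)
    print("%d: C = %.2f uF, R = %.0f Ohm" % (i, C_uF, R))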
170 Cf. [Fenech et al. 1973, p. 132]. C*ᵢ and R*ᵢ denote the capacitor and resistor values for either the ²³⁵U or ²³⁹Pu case.
171 The values λᵢ and βᵢ are according to [Tyror et al. 1970, p. 22]. Actual values tend to differ a bit in the literature.
Fig. 6.133. Basic circuit for the simulation of neutron kinetics with six groups of delayed neutrons
The circuit shown in figure 6.133 can be used directly in an analog computer
setup although it is advisable to add two electronic switches, one of which connects
RL to the summing junction of the operational amplifier only when the computer
is in operate mode, while the other switch discharges the selected capacitor CL1
or CL2 as well as the selected network for the delayed neutron groups when the
computer is in initial condition mode. Due to the relatively large values of the
series connection of resistors and capacitors, the initial condition time should be
chosen to be long enough to ensure that all capacitors are suitably discharged.
172 This section is based on [Bloch et al. 2010], [Zhan et al. 2016], http://www.hrl.harvard.edu/analog/, retrieved December 12th, 2022, and [Brockett 1991].
6.38 Smooth sorting
The following system of coupled ODEs sorts n values xᵢ(0) in a continuous fashion:
$$\dot{x}_1 = 2y_1^2$$
$$\dot{y}_1 = -(x_1 - x_2)y_1$$
$$\dot{x}_2 = 2\left(y_2^2 - y_1^2\right)$$
$$\dot{y}_2 = -(x_2 - x_3)y_2$$
$$\vdots$$
$$\dot{x}_n = -2y_{n-1}^2$$
These directly yield the program for sorting four values shown in figure 6.135. The initial conditions yᵢ(0) should be “small”; a value of about 1/100 has proven to work well. The values to be sorted are given by the xᵢ(0). Figure 6.134 shows a typical result from such a smooth sorting run. It should be noted that this approach is very sensitive to even tiny offsets introduced by the multipliers, which will cause the results to drift away quickly.
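A digital rendering of these equations is equally straightforward. The following Python sketch (Euler integration; input values and step size assumed) shows the xᵢ converging to the input values sorted in descending order:

x = [0.3, -0.5, 0.8, 0.1]   # values to be sorted, the x_i(0)
y = [0.01, 0.01, 0.01]      # "small" initial conditions y_i(0)
dt = 0.001

for _ in range(100_000):
    dx = [2 * y[0]**2,
          2 * (y[1]**2 - y[0]**2),
          2 * (y[2]**2 - y[1]**2),
          -2 * y[2]**2]
    dy = [-(x[i] - x[i + 1]) * y[i] for i in range(3)]
    x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    y = [yi + dt * dyi for yi, dyi in zip(y, dy)]

print(x)   # approaches [0.8, 0.3, 0.1, -0.5]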
Fig. 6.135. Analog computer program for smooth sorting of four values
7 Hybrid computing
In a hybrid computer setup, the digital computer can not only change coefficients at run-time but also generate functions,¹⁷³ act as a delay circuit, make decisions based on the values of analog variables, and much more.
The following examples and explanations are based on the hybrid controller for the Analog Paradigm Model-1 analog computer, but the ideas and techniques presented are by no means restricted to this particular setup.¹⁷⁴ Similar coupling devices have been available in the past and can be built using cheap off-the-shelf hardware such as Arduino® boards, etc.¹⁷⁵ A hybrid controller like this will typically provide the following functions:
Control lines: The hybrid controller must be able to control the mode of oper-
ation of the analog computer’s integrators. In the simplest case it will accept
commands like ic, op, or halt from the digital computer and set the ModeIC
and ModeOP control lines accordingly.
Depending on its connection to the digital computer there might be consider-
able delays in the exchange of signals due to communication latencies.176 This
can be problematic in applications where precise timing of IC- and OP-times is
required. It is therefore recommended that the hybrid controller itself should
feature a clock with at least a 1 µs resolution. All timing issues should be
performed locally by the hybrid controller without the need to communicate
with the attached digital computer.
The hybrid controller should also be able to sense overload conditions, to
control the POTSET line,177 and it should feature an input line for an external
halt signal.
Digital potentiometers: This type of potentiometer, similar to a multiplying
DAC, typically offers 10 bit resolution, allowing parameters to be set with a
resolution of about 0.1%.
If the input of such a potentiometer is connected to +1 or −1, it can even
be used as a simple and cheap DAC allowing the digital computer not only
173 Using a digital computer to generate functions of several variables is very useful as this
is notoriously difficult to implement on a pure analog computer.
174 Appendix C describes a simple hybrid controller for THE ANALOG THING.
175 The schematics of a simple Arduino® based hybrid controller can be found at http:
//analogmuseum.org/english/examples/hybrid/, retrieved on March 10th , 2020.
176 Latencies of up to several tens of ms are not unusual for serial communication over USB
depending on the operating system, device drivers, etc.
177 See section 2.5.
7.1 Hybrid controllers 221
to control parameters but also to feed time-varying values to the analog com-
puter.
Digital inputs: Many applications, such as parameter optimization, include
some conditions which have to be sensed by the digital computer in order
to abort a computer run or to change some parameters. This is typically
achieved by means of comparators yielding a digital output signal which can
be read by the digital computer using a digital input channel of the hybrid
controller.
Digital outputs: Using digital output lines of the hybrid controller, electronic
switches, such as those used in the CMP4 module, can be controlled by the
attached digital computer. A typical application for this is to apply step func-
tions to a simulated system.
Readout: One of the most important functions that a hybrid controller must
implement is a readout capability including provisions for data logging. Ideally,
the analog computer features a central readout system which allows every
computing element to be addressed unambiguously. The hybrid controller can
then address an individual computing element which connects its output to a
central ADC for readout.
The time required for a single readout operation is highly critical and includes
the time to address and select the computing element, the time required by
the ADC for one conversion, and the time for transmitting the digital value
to the digital computer.
If many elements have to be read during a computation over and over again, it
can be necessary to place the analog computer in HALT-mode before starting
readout operations and to reactivate OP-mode afterwards in order to avoid
excessive skew between the individual data values being read. This becomes
more and more important with increasing k0 values (time scaling) of the in-
tegrators in a given setup.
A more sophisticated hybrid controller may also offer a number of ADC chan-
nels possibly fitted with sample-hold inputs, which can be connected to the
outputs of computing elements. This allows all channels to be sampled at once
thus eliminating timing skew between the various channels.
Figure 7.1 shows the front panel of the Analog Paradigm hybrid controller (HC).
The group of 16 2 mm jacks in the upper half is connected to eight digital po-
tentiometers each with a resolution of 10 bits. The second group of 16 jacks in
the lower half connects to eight digital input lines and eight digital output lines.
On the left is the USB port used to connect to the digital host computer, while
an additional HALT-input and a trigger output are available on the lower right.
The module controls all address, data, and control lines of the analog computer’s
system bus178 and features a 16 bit ADC for readout.
The following examples give a basic overview of the general capabilities
of such a hybrid computer. More information on hybrid computing in general
with many applications can be found in [Bekey et al. 1968], [Korn et al. 1964],
[Schönefeld 1977], and other classic texts.
The Model-1’s hybrid controller used in these examples is connected to the digital
computer by means of a USB interface which emulates a serial line.179 To facil-
itate programming the digital computer a Perl¹⁸⁰-module (IO::HyCon) has been implemented; it provides an object-oriented interface to the hybrid controller.¹⁸¹
serial: This section defines the communication setup. port defines the serial de-
vice. Its name depends on the operating system and the USB-serial-adapter
being used. The remaining parameters are defaults and can be adopted un-
changed in most cases.
5 my $ac = IO::HyCon->new();
skeleton.pl
The scalar variable $ac is an object with access to all the methods implemented
in IO::HyCon to control the operation and setup of the analog computer. A call
$ac->ic() will set the analog computer to IC-mode, $ac->op() will initiate an
OP phase, and so on.
Fig. 7.2. Comparator circuit yielding an external halt signal (EXTHALT) when the shell hits the ground; the free potentiometer yields x_target
It is assumed that the y component of the cannon’s position satisfies ycannon >
0 while the target’s y component is ytarget = 0. Using a comparator as shown in
figure 7.2, an external halt signal is generated when the shell hits the ground. This
signal is connected to the EXTHALT input of the hybrid controller. xtarget can be
set manually by the potentiometer shown below the comparator. Its output is not
connected to anything as it is only read out by the digital computer after a run
to determine the miss distance δ.
The configuration file for this hybrid simulation is shown below – the sections
serial and types have been omitted as these are identical to those in the listing
shown on page 223. The two computing elements yielding the values required
by the digital portion of the hybrid computer setup are defined in the elements
section and labeled X and TARGET, representing xshell and xtarget .
trajectory.yml
1 elements:
2   X: 0x0221
3   TARGET: 0x0051
trajectory.yml
The Perl program for this hybrid simulation is shown below and is straight-
forward. After instantiating an IO::HyCon object $ac in line 6 a readout group is
defined in line 8. All elements of such a readout group can be read at once by the
hybrid controller with minimal clock skew – something which is not a requirement
in this particular case as the analog computer has been halted when the simulated
shell hit ground and thus all variables are static.184
The following two calls to set_ic_time and set_op_time define these time intervals as 1 and 2 ms respectively, which is sufficient in this example with all integrators set to k₀ = 10³.
The simulation starts with φ = π/4, set in line 12. Based on this value, ẋ(0) and ẏ(0) are computed and set in lines 14 and 15. Next, a single run consisting of an IC-period of 1 ms and an OP-phase of 2 ms is initiated by calling single_run_sync in line 19. This method blocks further program execution until either the defined OP-time has been reached or an external halt signal has occurred. In this simulation, the latter event will always precede a timeout since the shell will hit the ground in less than 2 ms with k₀ = 10³.
After each single run xshell and xtarget are read by invoking read_ro_group().
This returns a hash containing one key-value-pair for each element defined in the
readout group. Based on these values δ is determined in line 22. This value is then
used to determine a new elevation angle φ in line 23.185 Prior to starting the next
single run with this angle the current values of xshell , xtarget , and δ are printed on
screen.
trajectory.pl
 1 use strict;
 2 use warnings;
 3 use IO::HyCon;
 4
 5 my $ac = IO::HyCon->new();
 6 $ac->set_ro_group('X', 'TARGET');
 7 $ac->set_ic_time(1);
 8 $ac->set_op_time(2);
 9 $ac->enable_ext_halt();
10
16     $ac->single_run_sync();
17     my $result = $ac->read_ro_group();
18     my $delta = $result->{X} - $result->{TARGET};
19     $phi -= $delta / 10;
20     printf("%+0.4f\t%+0.4f\t%+0.4f\n",
21         $result->{X}, $result->{TARGET}, $delta);
trajectory.pl
185 This is by no means the best way to determine the angle required to hit a defined target
location. Its sole purpose is to serve as a programming example. If this were a simulation for
an actual commercial or research project, a much more sophisticated and faster converging
method of changing φ based on the values of δ would be employed.
7.4 Data gathering
Using a hybrid controller it is also possible to gather data either during HALT periods in a lengthy computer run or continuously during the OP period, as shown in the following example.¹⁸⁶ The goal is to solve the one-dimensional wave equation
$$\ddot{u} - \frac{1}{c}\,\frac{\partial^2 u}{\partial x^2} = 0.$$
The derivatives with respect to x are approximated by a difference quotient:
$$\ddot{u}_i - \frac{1}{c}\,\frac{u_{i-1} - 2u_i + u_{i+1}}{(\Delta x)^2} = 0$$
with ∆x = x/n and n ∈ ℕ. Defining
$$\lambda := \frac{1}{c(\Delta x)^2} = \frac{n^2}{cx^2}$$
results in a form suitable to derive an analog computer setup:
$$\ddot{u}_i = \lambda\left(u_{i-1} - 2u_i + u_{i+1}\right).$$
To simplify things further it is assumed that n = 4 and λ = 1 finally yielding the
following set of coupled ordinary differential equations:
u0 = δ(t)
ü1 = u0 − 2u1 + u2
ü2 = u1 − 2u2 + u3
ü3 = u2 − 2u3 + u4
ü4 = u3 − 2u4 + u5
u5 = 0
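The same set of ODEs can be integrated digitally for reference. The following Python sketch uses Euler integration with the impulse u₀ = δ(t) approximated by a short rectangular pulse of assumed width:

dt, t_end = 0.001, 30.0
u  = [0.0] * 4               # u1 .. u4
ud = [0.0] * 4               # their first derivatives

t = 0.0
while t < t_end:
    u0 = 1.0 if t < 0.5 else 0.0     # approximated delta impulse
    full = [u0] + u + [0.0]          # u0, u1 .. u4, u5
    for i in range(4):
        udd = full[i] - 2 * full[i + 1] + full[i + 2]
        ud[i] += dt * udd
        u[i]  += dt * ud[i]
    t += dt

print(u)   # displacements of the four interior nodes at t_end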
186 Depending on the hybrid controller used, the amount of available memory for storing
data may be rather limited. Sometimes it is an option to run one problem more than once,
gathering one variable at a time instead of gathering a plethora of variables all at once if
resolution time is not an issue.
187 The sampling interval is automatically determined by the amount of free memory and
the OP-time set for the run.
Fig. 7.3. Computer program for solving the one-dimensional wave equation
The corresponding configuration file for the hybrid controller looks basically
like the minimal example shown on page 223 with the following changes and
additions:
wave_equation.yml
 1 elements:
 2   U1: 0260
 3   U2: 0261
 4   U3: 0262
 5   U4: 0263
 6 problem:
 7   times:
 8     ic: 10
 9     op: 200
10   ro-group:
11     - U1
12     - U2
13     - U3
14     - U4
wave_equation.yml
A problem-section has been added here. It contains the timing settings, the
definition of a readout group (ro-group) and other optional subsections which are
not required here. The readout group defines a list of computing elements which
will be read during the OP phase of the simulation at equally-spaced intervals. In
this case the readout group consists of the elements labeled U1, U2, U3, and U4.
The corresponding Perl program using this configuration file is shown below:
wave_equation.pl
 1 use strict;
 2 use warnings;
 3 use IO::HyCon;
 4
The method call $ac->setup() performs the setup of the analog computer based on the problem-section. If a readout group is defined, data will be gathered automatically by the hybrid controller during each run. The data gathered is stored in the $ac object and can be extracted by calling the method $ac->get_data(), which returns an array reference. Another useful method is $ac->plot(), which can be parameterized to yield different plotting styles and requires gnuplot to be installed on the digital computer.
Calling $ac->plot() as in the example above yields the result shown in figure
7.4. The four lines in the graph represent the four variables u1 , . . . , u4 . Each vari-
able represents one location along the x-axis of the one-dimensional wave equation
problem.
In problems like this it would be advantageous to get a more 3d-ish display of
the variables being recorded. The plot() method supports such a display style,
too. Calling $ac->plot(type => ’3d’) yields the plot shown in figure 7.5 which
is much more intuitive than the overlaid 2d plots shown previously.188
188 It should be noted that the sampling interval was about half as long as in the run yielding
figure 7.4.
Fig. 7.4. Solution of the one-dimensional wave equation with the range x divided into four sections
of equal width
The following example is more demanding with respect to the interplay between
the digital and analog computers, which must work in parallel. The analog com-
puter simulates an inverted pendulum, as described in section 6.33, while the
digital computer implements a reinforcement learning 189 algorithm that learns to
keep the inverted pendulum in its upright vertical position by actively balancing
it.190
In order to learn how to balance the inverted pendulum the reinforcement
learning system running on the digital computer reads four values from the analog
computer: x and ẋ, describing the cart’s position and velocity, as well as φ and φ̇,
representing the angle and angular velocity of the pendulum mounted on the cart.
Since the setup described in section 6.33 only yields φ̇ it is necessary to derive φ
from its derivative by an additional integrator. The problem with a setup like this,
containing two subprograms, both relying on φ̇ and both employing integrators,
is that these two groups of integrators will inevitably drift apart during prolonged
simulation runs, due to unavoidable inaccuracies with the integrators' time scale factors.
Fig. 7.5. Solution of the one-dimensional wave equation with the range x divided into four sections of equal width, plotted with type => '3d'
To avoid this problem the setup used for this hybrid simulation, shown
in figure 7.6, employs two dedicated function generators, both using φ as their
inputs.191 Thus, φ can be derived from φ̇ by a single integrator guaranteeing that
all angle functions derived from it are consistent.
The reinforcement learning system running on the digital computer can push
the cart on which the pendulum is mounted by applying a constant force to the left
or right of the cart for a fixed time interval, like 10 or 20 ms. Figure 7.7 shows the
subprogram implementing this application of force under program control. Using
two digital outputs of the hybrid controller, D0 and D1, the cart can either be left
uninfluenced (D0 = 0) or pushed (D0 = 1) to the left or right as determined by
D1. An additional manually operated single pole, double throw switch allows the
operator to unbalance the pendulum deliberately to see how well the reinforcement
learning system can cope with unexpected perturbations.
Since the analog and digital computer work in parallel in this setup, it is of
utmost importance that communication latencies are minimized. If a real-time
operating system isn’t used on the digital computer, the process running the re-
inforcement system should run with elevated priority and any additional compu-
tational or input/output load on the digital computer reduced as far as possible.
The reinforcement learning system has been implemented in Python 3.¹⁹² The following skeleton listing shows the simple and self-explanatory main communication routines:
Fig. 7.6. Analog computer setup for the inverted pendulum with two dedicated function generators fed by φ
Fig. 7.7. Control circuit for the controlled inverted pendulum
87 hc_send(HC_SIM_IMPULSE_1)
88 sleep(HC_IMPULSE_DURATION / 1000.0)
89 hc_send(HC_SIM_IMPULSE_0)
90
Using the analog computer as its sparring partner, the reinforcement learning system learns to balance the inverted pendulum based on the values x, ẋ, φ, and φ̇. Whenever φ exceeds about 30 degrees or the cart position x leaves the interval [−1, 1], the current simulation run is terminated and the next run is started. At first the reinforcement learning system acts rather randomly but improves quickly over time. At the end of the training the pendulum is held in a stable position more or less indefinitely.¹⁹³
$$A\vec{x} = \vec{b}. \tag{7.1}$$
The iterative technique used in the following is the Richardson method¹⁹⁴
$$\vec{x}_{i+1} = \vec{x}_i - \omega\left(A\vec{x}_i - \vec{b}\right). \tag{7.2}$$
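In its purely digital form the Richardson iteration is just a few lines. The following Python sketch uses an assumed example system and an assumed ω, which must be small enough for the iteration to converge:

import numpy as np

A = np.array([[4.0, 1.0],            # assumed example system
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
omega = 0.2                          # assumed relaxation parameter

x = np.zeros(2)
for _ in range(200):
    x = x - omega * (A @ x - b)      # Richardson iteration (7.2)

print(x, np.linalg.solve(A, b))      # iterate vs. exact solution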
7.7 Solving PDEs with random walks
It has already been shown how partial differential equations can be solved by discretization along all but one of the variables involved. Unfortunately, this simple approach requires an immense number of computing elements, rendering it unfeasible for problems of realistic size.
A completely different and interesting approach is the use of random walks195
to solve partial differential equations, as the following example shows. The un-
derlying theory is covered by the Feynman-Kac formula, which links parabolic
partial differential equations with stochastic processes. Of central importance here
are Wiener processes, which are also known as Brownian motion. The main re-
quirement here is to have as many independent noise sources as there are spatial
dimensions in the problem under consideration.196
The following example is mainly inspired by [Sawhney et al. 2020] and
[Sawhney et al. 2022] and demonstrates how a partial differential equation of the
form
∇ (α∇u) + ω ⃗ ∇u − σu = −f (7.4)
with
2 2 2 2
Br = (x, y) ∈ R x + y ≤ r
195 See [Henze 2018] for detailed information on random walks in general.
196 The design and implementation of good noise sources with a bandwidth up to about
100 kHz is out of the scope of this book. [Beneking 1971] gives a thorough account of the
foundations of electronic noise in general.
197 Typically, the construction of suitable meshes tends to get quickly rather complicated
with increasing complexity of the structure.
198 Named after its discoverer, Siméon Denis Poisson.
199 This is often called a screening coefficient. Accordingly, equation (7.4) is also called a
screened Poisson equation.
Fig. 7.8. The region Ω: the square from (−1, −1) to (1, 1) with the ball B_r around the origin removed
denoting the two-dimensional ball around zero with radius r. This region is depicted in figure 7.8. The boundary is then
$$\partial\Omega = \partial B_r \cup \left\{(x, y) \in \mathbb{R}^2 \,\middle|\, |x| = 1 \vee |y| = 1\right\}$$
with
$$g(x, y) = \begin{cases} 2x & \text{if } (x, y) \in \partial B_r\\ \sin(2\pi x) & \text{if } |y| = 1\\ \sin(2\pi y) & \text{if } |x| = 1, \end{cases}$$
describing the boundary conditions for the PDE, i. e., u = g on ∂Ω. u may be considered to represent a temperature within Ω with the boundary conditions representing external heat sources or sinks.
To get the desired solution u at some coordinate (x, y) a number of independent random walks Xᵢ(t) = (xᵢ(t), yᵢ(t)) are performed, starting at the location Xᵢ(0). The computing run lasts until such a random walk hits a boundary after a runtime of τᵢ, i. e., Xᵢ(τᵢ) ∈ ∂Ω. Due to the absorption σ, a boundary near to (x, y) intuitively has a bigger impact on the final value of u than one further away. This is taken into account by an exponential term e^{−στ} with τ representing the elapsed time until the random walk hits a boundary, yielding²⁰⁰
$$u = \mathrm{E}\left[e^{-\sigma\tau}\, g\left(X(\tau)\right)\right] = \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} e^{-\sigma\tau_i}\, g\left(X_i(\tau_i)\right).$$
Figure 7.9 shows the analog computer setup for this problem. At the left are
two independent noise sources201 , each of which is connected to a DC block as
Fig. 7.9. Random walk program
shown in figure 5.14 in section 5.5.202 The resulting values are then integrated
yielding the x- and y-components of a single random walk.
Boundary checking consists of two parts: Detection of the outer boundary of
Ω is done by halting the analog computer when an overload occurs, i. e., when x or
y have crossed the boundary of the [−1, 1] × [−1, 1] box. Detecting the boundary
of Br is more involved and is done by the program shown in the right half of figure
7.9, yielding an external halt signal which will also cause the computation run to
halt.
In addition to the actual random walk, the program also implements the
required exponential term by means of the integrator shown in the upper right.
The algorithm running on the attached digital computer is straightforward. First, the parameters σ, r², and 1/T₀ are set. Then (x, y) is iterated over a set of points P_Ω for which the solution is required. For each such coordinate pair (x, y) the initial conditions x(0) and y(0) are set and an analog computer run is started. This run lasts until a halt is triggered by either an overload or the external halt signal. Depending on the values of x and y the boundary condition to be taken into account is then determined by a sequence of conditional statements, and u is updated accordingly.
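A purely digital counterpart of this procedure may clarify the algorithm. The following Python sketch assumes a pure Brownian walk (constant α, ω⃗ = 0, f = 0) and freely assumed values for σ, r, the step size, and the number of walks per point:

import math, random

sigma, r, dt = 1.0, 0.25, 1e-3   # assumed absorption, ball radius, time step

def g(x, y):
    # Boundary values as defined above
    if x * x + y * y <= r * r:
        return 2 * x
    if abs(y) >= 1.0:
        return math.sin(2 * math.pi * x)
    return math.sin(2 * math.pi * y)      # case |x| >= 1

def estimate(x0, y0, walks=200):
    total = 0.0
    s = math.sqrt(dt)                     # scale of one Brownian increment
    for _ in range(walks):
        x, y, tau = x0, y0, 0.0
        while abs(x) < 1 and abs(y) < 1 and x * x + y * y > r * r:
            x += s * random.gauss(0, 1)
            y += s * random.gauss(0, 1)
            tau += dt
        total += math.exp(-sigma * tau) * g(x, y)
    return total / walks                  # estimate of u(x0, y0)

print(estimate(0.5, 0.5))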
Figure 7.10 shows a typical result gained by this method. The partition PΩ
equals an equidistant 50 × 50 grid, and for each point (x, y) ∈ PΩ 200 random
walks were performed.
202 The two integrators forming these DC blocks are running continuously, i. e., not under the IC/OP control of the analog computer. If the filtering integrators were to be reset repeatedly, this would render the random signal unusable as the filter needs some time 1/T₀ to reach a steady state.
Fig. 7.10. Typical solution u(x, y) obtained by the random walk method
8 Summary and outlook
The key advantages of analog computers include
– the extremely high degree of parallelism that this class of machines exhibits,
– the high overall computational power, and
– the high energy efficiency.
The energy consumption of classic digital computers is also a growing concern in the long term.²⁰³ Increasing the clock frequency of digital computers further to achieve higher computing power is not an easy feat since the problem of heat removal gets increasingly worse.
Digital computers are plagued by other problems as well – one of the most
important effects is the upper bound of parallelism that can be obtained by an
algorithmic approach for a certain problem, as described by Amdahl’s law. Only
few problems are so well behaved that they can exploit the computing power of
a highly-parallel supercomputer with thousands and sometimes several millions
of individual cores. The majority of programs can’t be scaled effectively over so
many processing units.
Obviously, alternative approaches to classic digital computing are required to
fulfill the ever increasing demands for computational power. Analog computers
will be a central part of such novel systems, where they will act as coprocessors to
speed up time-consuming or time-critical simulations, etc., as they do not suffer
from the aforementioned problems.
Due to the representation of values as continuous voltages or currents and
the continuous time of operation within an analog computer, all signals (with
the exception of comparator and switch outputs) representing variables are “well
behaved”, i. e., they do not exhibit discontinuities. This results in a very high
energy efficiency.
Since an analog computer abandons the idea of a central memory system storing
commands and data in favor of interconnecting individual computing elements
according to the problem at hand, there is no von Neumann bottleneck. Further,
the absence of a stored algorithm allows for a perfect degree of parallelism among
the various computing elements of an analog computer.
Nevertheless, a lot of development work is required in the future to turn the
classic analog computer with its cumbersome patch panel into a useful, general
purpose, reconfigurable integrated circuit.204 A main challenge will be the design
and implementation of the configuration circuitry which will allow the elements
of such an analog computer to be interconnected automatically under the control
of a digital computer acting as a host system. In addition to the hardware, a soft-
ware ecosystem must be developed, including a suitable (standardized) hardware
description language to define the required interconnections of a particular analog
203 It has been estimated that worldwide digital information technologies may account
for 10 to 20% of the world’s energy consumption in 2030. Even today the total energy consump-
tion of these systems is well above 5% (see [Gelenbe et al. 2015], [Heddeghem et al. 2014],
[Jones 2018], or [Belkhir et al. 2018]).
204 Currently quite a lot of research is being carried out in academia and industry to develop
such ICs, but in 2023 there are no general purpose systems commercially available.
computer setup, libraries allowing the tight integration of the analog computer on
a chip into a hybrid computer, etc.
These challenges are formidable, but the possibilities of analog computing are
nearly endless and will more than justify all the necessary research and develop-
ment expenses.
Given the high energy efficiency of analog computers compared with digital
approaches, they will create new fields of application – such as medical implants,
which might consume so little electrical energy that they could be powered by
energy harvesting within the body, thus making energy storage devices that must
be changed or recharged at regular intervals superfluous. Analog computing will also find numerous
applications in mobile and wearable devices for signal pre- and post-processing to
save precious battery capacity. Even complex tasks such as trigger word detection
for assistance systems can be implemented using analog computer techniques,
saving power in standby mode, etc.
Other applications will benefit from the fact that a system that is not con-
trolled by a program stored in some volatile memory cannot be “hacked” in a
traditional sense.205 This will make analog computers the systems of choice when
it comes to control systems in critical infrastructure systems, systems on which
human lives depend, etc.
Also, most branches of artificial intelligence (AI) and machine learning will
benefit substantially from the application of analog computing techniques. Spiking
and bursting neurons can be directly implemented as analog circuits and the fact
that neuronal networks are basically defined by their interconnection structure,
together with their synaptic weights, fits perfectly with the paradigm of analog
computing.
All in all, analog computing is an extremely promising approach for future
high performance and/or low energy computing and will be part of most future
computing systems.
205 If a reconfigurable analog computer is used, the reconfiguration capability must be phys-
ically disabled to gain that benefit.
A The Laplace transform
One of the standard tools for solving differential equations analytically is the
Laplace transform, a parameter integral named after Pierre Simon Laplace.
Although this is not directly connected to analog computer programming, the
basic ideas and techniques are quite interesting and can sometimes be used to
cross-check the analog computer solution of a problem.206
It is defined as
$$F(s) = \mathcal{L}(f(t)) = \int_0^\infty f(t)\,e^{-st}\,dt \qquad (A.1)$$
This section presents the derivation of the Laplace transforms for some basic
functions.
206 Further information on this particular transform can be found in [Widder 2010]. Three
wonderful historic texts on the subject are [Carslaw et al. 1941], [van der Pol 1987], and
[Doetsch 1970]. Its particular application to problems in engineering and especially con-
trol theory can be found in many texts such as [Sensicle 1968] and [Föllinger et al. 2021].
[Duffy 1994] shows advanced transforms for the solution of partial differential equations.
$$\mathcal{L}(u(t)) = \frac{a}{s}\int_0^\infty e^{-\varphi}\,d\varphi = \left[-\frac{a}{s}\,e^{-\varphi}\right]_0^\infty = \frac{a}{s}. \qquad (A.3)$$
In many cases, the Dirac delta function209 δ(t) is used as input for a dynamic
system.210 It has zero width but satisfies
$$\int_{-\infty}^{\infty} \delta(t)\,dt = 1$$
and is therefore often called the unit impulse. It can be understood as the derivative
of the unit step function u(t). Its Laplace transform can therefore be derived by
integrating by parts as follows:
$$\mathcal{L}(\delta(t)) = \int_0^\infty \frac{d}{dt}u(t)\,e^{-st}\,dt = \left[u(t)\,e^{-st}\right]_0^\infty - \int_0^\infty u(t)\left(-s\,e^{-st}\right)dt = 0 + s\int_0^\infty e^{-st}\,dt.$$
Thus, $\mathcal{L}(\delta(t)) = 1$.
The Laplace transforms of the harmonic functions sine and cosine can be derived by means of the Euler formulas
$$\sin(\varphi) = \frac{e^{i\varphi} - e^{-i\varphi}}{2i} \qquad (A.6)$$
and
$$\cos(\varphi) = \frac{e^{i\varphi} + e^{-i\varphi}}{2}. \qquad (A.7)$$
Using (A.6) gives
$$\mathcal{L}(\sin(\omega t)) = \int_0^\infty \sin(\omega t)\,e^{-st}\,dt = \int_0^\infty \frac{e^{i\omega t} - e^{-i\omega t}}{2i}\,e^{-st}\,dt = \frac{1}{2i}\int_0^\infty \left(e^{(i\omega-s)t} - e^{-(i\omega+s)t}\right)dt$$
$$= \frac{1}{2i}\left[\frac{e^{(i\omega-s)t}}{i\omega-s}\right]_0^\infty - \frac{1}{2i}\left[\frac{e^{-(i\omega+s)t}}{-i\omega-s}\right]_0^\infty = \frac{1}{2i}\left(\frac{1}{s-i\omega} - \frac{1}{s+i\omega}\right) = \frac{\omega}{s^2+\omega^2}.$$
Things really get interesting when the Laplace transforms of operations instead
of simple functions are computed. This is shown below for the time derivative and
the time integral of a function. Let
$$f(t) = \frac{dx}{dt},$$
then integration by parts gives
$$\mathcal{L}(f(t)) = \int_0^\infty \frac{dx}{dt}\,e^{-st}\,dt = \left[x\,e^{-st}\right]_0^\infty - \int_0^\infty -s\,x\,e^{-st}\,dt = s\mathcal{L}(x) - x(0).$$
Here x(0) denotes the initial value of x at t = 0. The interesting fact is that
the time derivative of a function f (t) can be transformed into the much simpler
multiplication of the Laplace transform of the function with s.
This can be extended to higher derivatives as well. From
$$f(t) = \frac{d^2x}{dt^2}$$
follows that
$$\mathcal{L}(f(t)) = \int_0^\infty \frac{d^2x}{dt^2}\,e^{-st}\,dt = s^2\mathcal{L}(x) - s\,x(0) - \left.\frac{dx}{dt}\right|_{t=0}$$
and so forth.212
If multiplication with s is the transform of a time derivative, it is tempting to
assume that integration will be transformed into a division by s. This, indeed, is
the case. Let
$$f(t) = \int_0^t x\,dt + c$$
with c being a constant real value. Its Laplace transform is
$$\mathcal{L}(f(t)) = \int_0^\infty \left(\int_0^t x\,dt\right)e^{-st}\,dt + \frac{c}{s} = \left[-\frac{1}{s}\,e^{-st}\int_0^t x\,dt\right]_0^\infty + \frac{1}{s}\int_0^\infty x\,e^{-st}\,dt + \frac{c}{s} = \frac{1}{s}\mathcal{L}(x) + \frac{c}{s}.$$

212 $\left.\frac{dx}{dt}\right|_{t=0}$ will often be written as $[\dot{x}]_{t=0}$ for brevity.
The constant term c/s corresponds to the initial value c of the integral.
Given two functions f (t) and g(t) which can be subjected to a Laplace transform
yielding F (s) and G(s), the transform has the following useful characteristics:
$$\mathcal{L}\left(f(t-T)\,u(t-T)\right) = e^{-Ts}\,F(s) \qquad \text{(time shift)}$$
$$\mathcal{L}\left((f \star g)(t)\right) = F(s)\,G(s) \qquad \text{(convolution)}$$
213 The line integral of an analytic function f(z) over a closed curve γ can be determined by
$$\oint_\gamma f(z)\,dz = 2\pi i \sum_{j=1}^{n} I(\gamma, a_j)\,\mathrm{Res}(f, a_j).$$
Here, the a_j denote a finite set of points where f(z) has singularities, while I(γ, a_j) is the winding number of the path γ around the point a_j and Res(f, a_j) is the residue of f(z) at the point a_j. See [Carslaw et al. 1941], [Duffy 1994], etc., for more information on this technique.
$$\mathcal{L}(\delta(t)) = 1$$
$$\mathcal{L}(\delta(t-T)) = e^{-Ts}$$
$$\mathcal{L}(u(t)) = \frac{1}{s}$$
$$\mathcal{L}\left(\frac{1}{\sqrt{\pi t}}\right) = \frac{1}{\sqrt{s}}$$
$$\mathcal{L}(t^n) = \frac{n!}{s^{n+1}}$$
$$\mathcal{L}\left(t^n e^{-at}\right) = \frac{n!}{(s+a)^{n+1}}$$
$$\mathcal{L}\left(\frac{1}{a^2}\left(1 - (1+at)\,e^{-at}\right)\right) = \frac{1}{s(s+a)^2}$$
$$\mathcal{L}\left(\frac{e^{-at} - e^{-bt}}{b-a}\right) = \frac{1}{(s+a)(s+b)}$$
$$\mathcal{L}(\sin(\omega t)) = \frac{\omega}{s^2+\omega^2}$$
$$\mathcal{L}(\cos(\omega t)) = \frac{s}{s^2+\omega^2}$$
$$\mathcal{L}\left(e^{-at}\sin(\omega t)\right) = \frac{\omega}{(s+a)^2+\omega^2}$$
$$\mathcal{L}\left(e^{-at}\cos(\omega t)\right) = \frac{s+a}{(s+a)^2+\omega^2}$$
$$\mathcal{L}(\sinh(\omega t)) = \frac{\omega}{s^2-\omega^2}$$
$$\mathcal{L}(\cosh(\omega t)) = \frac{s}{s^2-\omega^2}$$
$$\mathcal{L}\left(\frac{\sqrt{a^2+\omega^2}}{\omega}\sin\left(\omega t + \tan^{-1}\frac{\omega}{a}\right)\right) = \frac{s+a}{s^2+\omega^2} \qquad (A.8)$$
$$\mathcal{L}\left(\frac{\sin(\omega t)}{t}\right) = \arctan\frac{\omega}{s}$$
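Table entries like these can be cross-checked with a computer algebra system. A minimal sketch using the Python library SymPy (the positive=True assumptions merely suppress the convergence conditions):

from sympy import symbols, sin, cos, exp, laplace_transform

t, s = symbols("t s", positive=True)
a, omega = symbols("a omega", positive=True)

# Yields omega/(omega**2 + s**2), confirming the entry for sin(omega t).
print(laplace_transform(sin(omega * t), t, s, noconds=True))

# Yields (a + s)/(omega**2 + (a + s)**2) for the damped cosine.
print(laplace_transform(exp(-a * t) * cos(omega * t), t, s, noconds=True))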
A.5 Example

As a simple example, consider the differential equation
$$\ddot{x} + x = \delta(t)$$
with the initial conditions $x(0) = 1$ and $\dot{x}(0) = 0$. Applying the Laplace transform to both sides yields
$$s^2\mathcal{L}(x) - s + \mathcal{L}(x) = 1,$$
i. e.,
$$(s^2+1)\mathcal{L}(x) = s+1 \quad\Rightarrow\quad \mathcal{L}(x) = \frac{s+1}{s^2+1},$$
which matches the right-hand side of equation (A.8) in the preceding section with
a = 1 and ω² = 1, yielding the final solution of this differential equation:
$$x = \sqrt{2}\,\sin\left(t + \frac{\pi}{4}\right)$$
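This solution can be verified symbolically as well; a small SymPy sketch (SymPy's one-sided inverse transform may carry a Heaviside(t) factor, which vanishes here since t is declared positive):

from sympy import (symbols, inverse_laplace_transform, simplify,
                   sqrt, sin, pi)

t, s = symbols("t s", positive=True)

x = inverse_laplace_transform((s + 1) / (s**2 + 1), s, t)
print(x)                                        # sin(t) + cos(t)
print(simplify(x - sqrt(2) * sin(t + pi / 4)))  # 0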
A.6 Block diagrams and transfer functions

In engineering, dynamic systems are often described as block diagrams showing all
forward and feedback paths of the system under consideration, such as transducers,
amplifiers, servomotors, etc. The individual blocks are described by a transfer
function Φ which defines how the input and output of the block are linked together:
$$\frac{\text{output}}{\text{input}} = \Phi.$$
214 Instead of the Laplace transform variable s, which is identified with differentiation (and, in the case of 1/s, with integration), the classic p operator is often used:
$$p = \frac{d}{dt}, \qquad \frac{1}{p} = \int_0^t dt.$$
B Solving the heat equation with a passive network
The two-dimensional heat equation as treated in section 6.30 can also be solved
by means of a two-dimensional grid consisting of a number of identical passive
resistor/capacitor (RC ) nodes as shown below.215 The aim is to model the heat
flow in a two-dimensional sheet of thermally conductive material such as a thin
metal plate. The flow is described by
$$\dot{u} = \alpha \nabla^2 u. \qquad (B.1)$$
with ui,j denoting a single grid node, and qi,j representing a heat source or sink
connected to this node. Such a node is shown in figure B.1 and consists of four
resistors and a capacitor with a common junction. Applying Kirchhoff’s current
law to the center node ui,j yields the following expression for the voltage at this
node:
$$\frac{(u_{i,j-1}-u_{i,j}) + (u_{i,j+1}-u_{i,j}) + (u_{i-1,j}-u_{i,j}) + (u_{i+1,j}-u_{i,j})}{R} - C\dot{u}_{i,j} + I_{i,j} = 0.$$
215 The author would like to thank Dr. Chris Giles for his invaluable support in setting up
this example.
Fig. B.1. A single grid node, consisting of four resistors R and a capacitor C with a common junction u_{i,j}; an optional current I_{i,j} may be injected into the node
The terms in the numerator are the currents flowing through the resistors, the
second term describes the current through the capacitor, and the last term is an
optional current that may be injected into this node. Rearranging then yields
$$\dot{u}_{i,j} = \frac{u_{i,j-1} + u_{i,j+1} + u_{i-1,j} + u_{i+1,j} - 4u_{i,j}}{RC} + \frac{I_{i,j}}{C}. \qquad (B.2)$$
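Equation (B.2) can also be integrated numerically to get a feeling for the behavior of such a network. The following minimal explicit-Euler sketch in Python uses arbitrary values for R, C, the time step, and the injected current, and treats all edges as insulated, which is a simplification:

import numpy as np

N = 8                      # 8 x 8 grid of nodes as in this appendix
R, C = 1.0, 1.0            # node resistance and capacitance (arbitrary)
dt, steps = 1e-3, 5000     # Euler step width and step count (arbitrary)

u = np.zeros((N, N))       # node voltages u_{i,j}
I = np.zeros((N, N))       # injected currents I_{i,j}
I[0, 0] = 1.0              # current injected at the corner node u_{0,0}

for _ in range(steps):
    # Edge padding replicates the border nodes, i.e., all edges are
    # treated as insulated -- a simplification of the real network.
    p = np.pad(u, 1, mode="edge")
    neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    u += dt * ((neighbors - 4.0 * u) / (R * C) + I / C)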
Equations (B.1) and (B.2) have the same structure so that a grid consisting
of many nodes as shown in figure B.1 may be used to solve the two-dimensional
heat equation problem216 by considering
216 Approaches like this were in widespread use well into the 1960s for solving a wide range
of heat transfer, electromagnetic, and fluid flow problems.
217 The symmetry along the 45 degree diagonals cannot be exploited as easily using a passive model like this as it was in the example shown in section 6.30.
Fig. B.5. Steady state of the network and differences between adjacent nodes
The two graphs shown in figure B.5 depict the steady state after forcing the
temperature at node u0,0 to a fixed value. The left hand picture shows the tem-
perature distribution across the plate quadrant. The graph on the right shows
the heat flux densities, which were obtained by measuring the voltages across the
individual resistors which correspond to the heat flux between adjacent nodes.
This yields the x- and y-components of the heat flux which allowed the heat flux
magnitudes in between the nodes to be calculated.
Figure B.6 shows the time dependent voltages at the diagonal nodes ui,i after
injecting a pulse I0,0 at the corner grid node u0,0 . The diffusion of the temperature
across the plate quadrant can clearly be seen as well as the eventual settling of
the nodes along the diagonal to the boundary condition temperature as defined
by the inputs α and β.
As inflexible as passive networks like this are with regard to their spatial struc-
ture, they can be used to solve partial differential equations of a given structure
with varying boundary conditions very efficiently. [Volynskii et al. 1965] contains
a lot of practical examples and theoretical background.
Fig. B.6. Response to a step input at nodes u0,0 , u1,1 , . . . u7,7 (from left to right and up/down)
C A simple hybrid controller for THE ANALOG THING
This chapter describes a simple Arduino®218 based hybrid controller for THE
ANALOG THING.219 At its heart is an Arduino® Mega 2560 single board micro-
controller, which is connected to the HYBRID connector of the analog computer.
The microcontroller can take over control of the analog computer, i. e., it can con-
trol its mode of operation (IC, OP, or HALT), and it can sample data from up to
four analog outputs.220
Table C.1 shows the connections between this connector and the Arduino®
input/output pins.221 These connections allow the Arduino® to control the oper-
ation of the analog computer and to gather data from the four analog outputs.
The hybrid controller software222 implements the following commands:
arm: Arm the data logger for data capturing to start at the beginning of the next
single run.
channels=<value>: This command sets the number of channels that are to be
logged. <value> can be set to 1, 2, 3, or 4.
disable: Disable the hybrid controller.223 This allows THE ANALOG THING
to work as a standalone system without the hybrid controller interfering.
enable: Enable the hybrid controller, which then takes over control of the at-
tached analog computer.
halt: Set the analog computer to HALT-mode.
help: Print a short help text listing all of the available commands.
ic: Set the analog computer to IC-mode.
ictime=<value>: Set the initial condition time to <value> milliseconds. If any of
the integrators of the analog computer are configured to run in SLOW mode
(corresponding to k0 = 10), this interval should be at least several tens of
milliseconds long.
interval=<value>: This sets the data sampling interval to <value> milliseconds.
Data sampled will be transmitted through the USB-connection of the micro-
controller with individual channel values separated by a semicolon.
op: Set the analog computer to OP-mode.
optime=<value>: Set the operation mode time to <value> milliseconds. This set-
ting, together with ictime, controls the behavior of repetitive operation or
single run.
rep: Start repetitive operation. The attached analog computer cycles between IC-
and OP-mode in this mode until switched explicitly to IC, OP, or HALT mode
by the hybrid controller.
run: Start a single run on the analog computer.
status: Return general status information about the configuration of the hybrid
controller.
Figure C.1 shows a typical setup consisting of THE ANALOG THING and an
attached microcontroller board. A typical command sequence for performing a
single run of a program gathering one channel of data would look like this:
ictime=50
optime=3000
interval=5
channels=1
arm
enable
run
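Such a command sequence can also be sent programmatically. The following minimal Python sketch uses the pyserial package; the device name, baud rate, and the terminating newline are assumptions that must be adapted to the actual firmware and system:

import time

import serial

# Device name, baud rate, and the trailing newline are assumptions --
# adapt them to the actual setup.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    for command in ("ictime=50", "optime=3000", "interval=5",
                    "channels=1", "arm", "enable", "run"):
        port.write(command.encode() + b"\n")
        time.sleep(0.05)              # give the microcontroller time to parse

    time.sleep(3.5)                   # wait for the 3 s OP phase to complete
    raw = port.read(port.in_waiting or 1).decode(errors="ignore")

# One record per sampling interval; with channels > 1 the individual
# channel values are separated by semicolons.
samples = [line.split(";") for line in raw.splitlines() if line.strip()]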
Fig. C.2. Result from a three second run of the Hindmarsh-Rose model
D An oscilloscope multiplexer
Most oscilloscopes provide at least two y-channels which allow the display of two
curves at once on the screen. Although this is often sufficient for work with an
analog computer, sometimes it is desirable to have not only two y-inputs but
several pairs of x- and y-inputs. This allows multiple figures, each defined by a
corresponding x, y signal pair, to be displayed. Since this is not possible with most
standard oscilloscopes, the need for an x, y oscilloscope multiplexer arises.226
Figure D.1 shows the prototype implementation of the four channel multi-
plexer described in the following.
The schematic is shown in figure D.2. It features four y- and four x-inputs and
yields a single y- and x-output to the oscilloscope. In addition to this, a z-signal
is generated which can be used for automatic beam blanking if the oscilloscope
supports an external beam control input.
The lower third of the figure contains the clock circuitry which can be run in
either of two modes as determined by the position of switch S1. In its top position
the multiplexer is free running, controlled by a simple astable multivibrator based
on an NE555 (IC1). If S1 is set to the lower position, the clock signal is activated
every time the analog computer is in OP-mode. This is especially useful when
repetitive operation is selected as the oscilloscope will switch from one input pair
to the next in conjunction with the operation cycles. This circuit also generates
the z-signal which turns the beam off whenever the analog computer is not in OP-
mode as well as while switching from one input channel to another. The operational
amplifier IC3 in conjunction with the potentiometer R5 is used to adjust the
blanking signal level between +15 V and −15 V to suit the particular oscilloscope.
226 If this functionality is only rarely used, the electronic switches of an analog computer can
be used in pairs with proper control logic.
The middle section of the schematic shows the two-bit counter which selects
one out of the four input signal pairs. The counter itself is built from two JK
flip-flops (IC6). Using switch S2 the number of channels displayed can be selected from
one to four. The two address bits feed a two-to-four demultiplexer (IC4B) which
yields four active-low logic signals controlling the two electronic quad switches
IC7 and IC9. Both outputs to the oscilloscope are buffered by simple impedance
converters (IC8 and IC10).
E A log() function generator

227 This resistor can also be implemented by paralleling four high-precision 1M resistors if 250k are not readily available.
It should be noted that the LOG112 has a rather limited bandwidth, so its
application in simulations involving high time scale factors should be given careful
consideration.
F A sine/cosine generator
In some cases, such as the inverted pendulum simulation which was used to train
a machine learning system, basic harmonic functions such as sin(φ) and cos(φ)
cannot be generated using the quadrature generator described in section 6.33,
figure 6.118, because computing element variations will cause φ̇ and the derived
functions sin(φ) and cos(φ) to drift apart over time.
If the argument φ can be guaranteed to be in an interval like [−1, 1], func-
tions like sin(φ), etc., can be generated using special function generators based
on polygonal approximation. The AD639228 is a rare but useful integrated circuit
which generates basic trigonometric functions; this was used to build the dedicated
function generator shown in figure F.1.
Both potentiometers R2 and R6 have to be adjusted so that an input voltage in the range of [−2π, 2π] rad ≙ [−10, 10] V is mapped to the interval [−1, 1] ≙ [−7.2, 7.2] V.
Fig. F.1. Sine/cosine generator based on the AD639 universal trigonometric function converter
G A simple joystick interface
Since an analog computer is ideally suited for all kinds of dynamic systems simu-
lations, a joystick interface is a really versatile peripheral device as it makes “man
in the loop” simulations possible. This appendix describes a simple adapter cir-
cuit which allows a two-channel analog joystick, as commonly used for controlling
models, to be used with an analog computer.
The joystick used here is a two-channel joystick from an old model remote
control. It consists of two 4k7 (4700 Ω) potentiometers – one for the x- and one
for the y-direction. A simple handheld enclosure was built, as shown on the left
in figure G.2. This also holds a push button which can be used to control an
electronic switch or a comparator in the analog computer.
The circuit shown in figure G.1 is pretty straightforward: First, the two refer-
ence voltages of ±10 V are buffered by the operational amplifiers IC1A and IC1B.
R1 and R2 are the joystick’s potentiometers while R1* and R2* are used to set the
origin of the joystick (putting the joystick into its middle position with respect to
x and y should yield a value of 0 on each channel).
Since the potentiometers used in a typical joystick allow for about 270 degrees of
travel while the joystick itself only allows for a much smaller deflection, the wipers
will never hit their end positions. Accordingly, the output signals x and y must be
amplified to satisfy −10 V ≤ x, y ≤ +10 V. This is done by the two operational amplifiers IC1C
and IC1D. The capacitors C1 and C2 suppress any unwanted oscillations which
otherwise may occur. The slight non-linearity due to the load resistance of 10 k
on the wipers of the joystick potentiometers is negligible in this application, so no
additional buffer stage is required.
The switch S1 which is also mounted in the joystick’s enclosure can be seen
on the right side of the schematic in figure G.1. As long as it is open the output is
tied to +10 V by means of a 100k resistor. Closing the switch will yield a negative
value so that either a comparator or an electronic switch (in the case of an Analog
Paradigm Model-1 analog computer) can be controlled by this switch.
The right half of figure G.2 shows the prototype circuit built on a standard
160 mm × 100 mm breadboard. The joystick is plugged into the 9-pin SUB-D connector
visible on the upper left of the circuit board. The 2 mm sockets yield the output
signals (x and y as well as the signal from the push button switch).
H The Analog Paradigm bus system
The Analog Paradigm Model-1 analog computer features a bus system that makes
it easy to extend the system with additional modules for special operations such
as the logarithm function generator described in appendix E. All computing modules
are based on standard Eurocards (160 mm×100 mm) using two-row (A/C) DIN
41612 connectors. Table H.1 shows the signals available on this bus system.
When designing new hardware for the Analog Paradigm Model-1 analog com-
puter, the following constraints should be taken into account:
– Signals denoted by an overline are active-low logic signals.
– Analog and digital ground (AGND/DGND) should never be connected to
each other on a module as this would introduce a ground loop and deteriorate
system performance.
– When POTSET is active, i. e., low, the computer is set to potentiometer set-
ting mode. The integrators are halted and the inputs of all potentiometers
are connected to the positive reference voltage. Using the readout feature de-
scribed below, the value set on each potentiometer can then be displayed on
a DVM.
– The OVERLOAD line can be driven by a computing module to signal an
overload condition to the control unit. The driver must be of the open collector
type. (A standard TTL-driver can be used but must be decoupled by a diode.)
– The signals MODEIC and MODEOP control the operation of the integrators
of the overall system.
– To read the output value of a computing element its address must be placed
onto the address lines A15–A0 and the READ line must be driven low. The
element selected in this way will then connect its output to the READOUT
line by means of an electronic switch (or a relay). In addition to this, each
computing element can drive the data lines (D7–D0) to show its module type,
which can then be displayed on a readout unit.

Tab. H.1. Signals on the Analog Paradigm bus system

A          Pin no.   C
+15 V         1      +15 V
−15 V         2      −15 V
AGND          3      AGND
+10 V         4      +10 V
−10 V         5      −10 V
AGND          6      AGND
READOUT       7      READOUT
AGND          8      AGND
AGND          9      AGND
POTSET       10      OVERLOAD
MODEOP       11      MODEIC
RS4          12      RS3
RS2          13      RS1
A15          14      A14
A13          15      A12
A11          16      A10
A9           17      A8
A7           18      A6
A5           19      A4
A3           20      A2
A1           21      A0
             22
             23
RS0          24      CSELECT
WRITE        25      READ
D7           26      D6
D5           27      D4
D3           28      D2
D1           29      D0
SCL          30      SCA
+5 V         31      +5 V
DGND         32      DGND
– The 16 address bits consist of a rack number (A15–A12, typically 0000), a
chassis number (A11–A8, counted from bottom to top in a rack), a slot number
(A7–A4, from left to right), and an element number (A3–A0). Each card can
hold up to 16 computing elements; a small code sketch of this address layout
follows below.
– The lines labeled RS, SCA, SCL, and WRITE are reserved for future use.
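The following small Python sketch illustrates this address layout; the function names are, of course, arbitrary:

def make_address(rack, chassis, slot, element):
    """Pack rack, chassis, slot, and element number (4 bits each)
    into a 16 bit bus address."""
    assert all(0 <= v <= 15 for v in (rack, chassis, slot, element))
    return (rack << 12) | (chassis << 8) | (slot << 4) | element

def split_address(address):
    """Inverse of make_address."""
    return ((address >> 12) & 0xF, (address >> 8) & 0xF,
            (address >> 4) & 0xF, address & 0xF)

# Element 0 on the card in slot 6 of chassis 1 in rack 0 -> address 0x0160.
assert make_address(0, 1, 6, 0) == 0x0160
assert split_address(0x0160) == (0, 1, 6, 0)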
I HyCon commands
The Model-1 hybrid controller, HC, is connected by means of a serial line USB
interface to a digital computer which sends appropriate commands to control
the operation of the analog computer. Using a serial line terminal program it is
possible to control the analog computer by typing in these commands:229
A: Enable halt-on-overflow. When enabled, any overload condition will immedi-
ately HALT the analog computer so that the element(s) causing this condition
can be identified by their respective overload indicators.
a: Disable halt-on-overflow.
B: Enable external halt – a trigger signal on the HALT-input of the hybrid con-
troller will place the analog computer into HALT mode regardless of its current
mode of operation.
b: Disable external halt.
c: This command expects a six-digit decimal number which specifies the OP-time
for repetitive or single-run operation in milliseconds. If values less than 10⁵
milliseconds are required, which is typically the case, the number has to be
padded with zeros on the left.
C: This command also expects a six digit decimal number specifying the time for
setting initial conditions in milliseconds.
d: Clear the digital output port specified by an address in the range of 0 to 7.
D: Like d but set the digital output port specified by a single-digit address follow-
ing the command.
229 It should be noted that these commands should not be followed by a carriage return
(<CR>) and/or line feed (<LF>) control character. The hybrid controller firmware parses its
input data in a bytewise fashion and is not record oriented. Spurious <CR>/<LF> or whitespace
characters will result in error messages as they are regarded as illegal commands.
e: Issuing this command starts repetitive operation. The analog computer will
cycle through the modes IC and OP with the respective times set by the c-
and C-commands until it is halted.
E: This command starts a single-run cycle, i. e., the analog computer runs through
the sequence IC, OP, HALT with the IC- and OP-times as specified above.
f: If a readout-group has been defined with the G-command, issuing an f will read
all of the elements in that group sequentially and return their values.
F: This command, too, starts a single-run cycle. The main difference to the E-
command is that a message will be returned at the end of the cycle, allowing
the analog and digital computer to be synchronized.
g: To read the value of a single computing element, the g-command, followed by
a four digit hexadecimal address, is used.230 The hybrid controller will return
the ID of the computing element as well as its current value.
G: This command expects a semicolon-separated list of four digit hexadecimal
addresses which must be terminated by a single dot to let the hybrid controller
know that no more entries follow.
h: Set the analog computer into HALT-mode.
i: Set the analog computer into IC-mode.
l: Return all values sampled during the last OP cycle if a readout group had been
defined previously.
o: Set the analog computer to OP-mode.
P: This command for setting a digital potentiometer expects the four digit hex-
adecimal address of the module containing the potentiometer to be set, fol-
lowed by a two digit address of the potentiometer, followed by a four digit
decimal (!) value in the range of 0000 to 1023.231 To set the digital poten-
tiometer number three on the module with the base address 0x0090 to 1/2,
the command P0090030511 would be required.
q: This prints out the current digital potentiometer setting.
R: Read all eight digital input ports and return their respective values.
s: Print out status information.
S: Set the analog computer into POTSET-mode. This is useful for setting manual
potentiometers – see section 2.5.
t: During an OP-phase this command will print the elapsed time since the OP-
mode started.
x: Reset the hybrid controller.
X: Configure a crossbar switch module.232
?: Print help.
The following example shows how the hybrid controller can be controlled by man-
ually issuing appropriate commands.233 First, the OP- and IC-times are set to 500
and 100 milliseconds respectively as echoed by the controller (lines 1 to 4). Then a
repetitive run is started by issuing an e-command which is eventually terminated
by forcing the analog computer into HALT-mode (line 6). Then the digital output
with address 0 is set to high before another repetitive run is started and later
terminated by the h-command.
The command in line 10 reads the value from the computing element with the
hexadecimal address 0160. The hybrid controller returns 0.0061 1, the first part
being the value read while the second number is the identification number of the
computing element being read.
Line 12 defines a readout group consisting of two computing elements with the
respective hexadecimal addresses 0160 and 0161. Issuing an s-command returns
complete status information of the hybrid controller also containing the addresses
of all elements in the readout group. The f-command in line 16 reads all elements
of this group and returns their values as -0.0004;-0.0002.
1 c000500
2 T_OP=500
3 C000100
4 T_IC=100
5 e
6 h
7 D0
8 e
9 h
10 g0160
11 0.0061 1
12 G0160;0161.
13 s
14 STATE=NORM,MODE=IC,EXTH=DIS,OVLH=DIS,IC-time=100,OP-time=500,
15 MYADDR=90,RO-GROUP=352;353
16 f
17 -0.0004;-0.0002
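A session like the one shown above can easily be scripted. A minimal Python sketch using the pyserial package (device name and baud rate are assumptions; note that, as footnote 229 points out, no line feeds must be sent after the commands):

import time

import serial

# Device name and baud rate are assumptions -- adapt to the actual setup.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as hc:
    def command(text, wait=0.1):
        hc.write(text.encode())       # no trailing <CR>/<LF>!
        time.sleep(wait)
        return hc.read(hc.in_waiting or 1).decode(errors="ignore")

    command("c000500")                # set OP-time to 500 ms
    command("C000100")                # set IC-time to 100 ms
    print(command("E", wait=1.0))     # single run: IC, OP, HALT
    print(command("g0160"))           # read the element at address 0x0160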
233 The line feeds after each command have been inserted for better readability. The hybrid
controller does not expect line feed control characters after commands.
Bibliography
[Belkhir et al. 2018] Lotfi Belkhir, Ahmed Elmeligi, “Assessing ICT global emissions footprint: Trends to 2040 & recommendations”, in Journal of Cleaner Production 177, Elsevier, 2018, pp. 448–463
[Bekey et al. 1968] George A. Bekey, Walter J. Karplus, Hybrid Compu-
tation, John Wiley & Sons, Inc., 1968
[Beneking 1971] Heinz Beneking, Praxis des Elektronischen Rauschens, Bibli-
ographisches Institut, 1971
[Bloch et al. 2010] Anthony M. Bloch, Alberto G. Rojo, “Sorting: The
Gauss Thermostat, the Toda Lattice and Double Bracket Equations”, in
[Hu et al. 2010, p. 35 et seq.]
[Borchardt] Inge Borchardt, Kepler und die Atomphysik – Der Beschuss
eines Atomkerns mit Alphateilchen auf einem Tischanalogrechner, Demon-
strationsbeispiel 3, AEG-Telefunken, ADB 003 0570
[Bowman 1958] Frank Bowman, Introduction to Bessel Functions, Dover Pub-
lications, Inc., 1958
[Bracher et al. 2021] Johannes Bracher, Daniel Wolfram, Tilmann
Gneiting, Melanie Schienle, “Vorhersagen sind schwer, vor allem die
Zukunft betreffend: Kurzzeitprognosen in der Pandemie”, in Mitteilungen
der Deutschen Mathematiker-Vereinigung, 2021, 29, 4, pp. 186–190
[Brock 2019] David C. Brock, “The Shocking Truth Behind Arnold Nordsieck’s Differential Analyzer”, in IEEE Spectrum, Nov. 30, 2017, https://spectrum.ieee.org/tech-history/dawn-of-electronics/the-shocking-truth-behind-arnold-nordsiecks-differential-analyzer, retrieved September 24th, 2019
[Brockett 1991] R. W. Brockett, “Dynamical Systems That Sort Lists, Di-
agonalize Matrices, and Solve Linear Programming Problems”, in Linear
Algebra and its Applications, 146 (1991), Elsevier Science Publishing Co.,
Inc., pp. 79–91
[Bronstein et al. 1989] I. N. Bronstein, K. A. Semendjajew, Taschenbuch
der Mathematik, 24. Auflage, Verlag Harri Deutsch, Thun und Frank-
furt/Main, 1989
[Bryant et al. 1962] Lawrence T. Bryant, Marion J. Janicke, Louis C.
Just, Alan L. Winiecki, Engineering Applications of Analog Computers,
Argonne National Laboratory, ANL-6319, October 1962
[Bywater 1973] R. E. H. Bywater, The Systems Organization of Incremental
Computers, PhD thesis, University of Surrey, 1973
[Carlson et al. 1967] Alan Carlson, George Hannauer, Thomas Carey,
Peter J. Holsberg (eds.), Handbook of Analog Computation, 2nd edition,
Electronic Associates, Inc., Princeton, New Jersey, 1967
[Carslaw et al. 1941] H. S. Carslaw, J. C. Jaeger, Operational Methods in
Applied Mathematics, Oxford at the Clarendon Press, 1941
[Chang 2017] Tai-Ping Chang, “Chaotic Motion in Forced Duffing System Sub-
ject to Linear and Nonlinear Damping”, in Hindawi – Mathematical Prob-
lems in Engineering, Volume 2017, Article ID 3769870, 15
[chromatic 2015] chromatic, Modern Perl, Onyx Neon, Inc., 2015, http://www.onyxneon.com/books/modern_perl/index.html, retrieved October 3rd, 2019
[Cope 2017] Greg Cope, The Aizawa Attractor, http://www.algosome.com/articles/aizawa-attractor-chaos.html, retrieved December 15th, 2018
[Cunningham 1954] W. J. Cunningham, Time-Delay Networks for an Analog
Computer, Office of Naval Research, Nonr-433(00), Report No. 6, August 1,
1954
[Doetsch 1970] Gustav Doetsch, Einführung in Theorie und Anwendung der
Laplace-Transformation, Birkhäuser Verlag Basel und Stuttgart, 1970
[Duffing 1918] Georg Duffing, “Erzwungene Schwingungen bei veränderlicher
Eigenfrequenz und ihre technische Bedeutung”, Sammlung Vieweg, Heft
41/42, Braunschweig 1918
[Duffy 1994] Dean G. Duffy, Transform Methods for Solving Partial Differen-
tial Equations, CRC Press, 1994
[EAI PACE 231R] EAI, PACE 231R analog computer, Electronic Associates, Inc.,
Long Branch, New Jersey, Bulletin No. AC 6007
[EAI 1964] EAI, Solution of Mathieu’s Equation on the Analog Computer, Ap-
plications Reference Library, Application Study: 7.4.4a, 1964
[EAI 1.3.2 1964] EAI, Continuous Data Analysis with Analog Computers Using Statistical and Regression Techniques, EAI Applications Reference Library 1.3.2a, 1964, http://bitsavers.org/pdf/eai/applicationsLibrary/1.3.2a_Continuous_Data_Analysis_with_Analog_Computers_Using_Statistical_and_Regression_Techniques_1964.pdf
[Fano 1950] R. M. Fano, “Short-Time Autocorrelation Functions and Power
Spectra”, in The Journal of the Acoustical Society of America, Volume 22,
Number 5, pp. 546–550
[Fenech et al. 1973] Henri Fenech, Jay Bane, “A nuclear reactor analog sim-
ulation for undergraduate nuclear engineering education”, in Simulation,
Volume 20, Issue 4, April 1974, pp. 127–135
[Fifer 1961] Stanley Fifer, Analogue Computation – Theory, Techniques and
Applications, Vol. III, McGraw-Hill Book Company, Inc., 1961
[Fischer 2022] Thomas Fischer, First Steps – THE ANALOG THING, https://the-analog-thing.org/THAT_First_Steps.pdf, retrieved December 14th, 2022
[Föllinger et al. 2021] Otto Föllinger, Mathias Kluwe, Laplace-, Fourier-
und z-Transformation, VDE Verlag GmbH, 11. Auflage, 2021
[Forbes 1957] Georges F. Forbes, Digital Differential Analyzers, Fourth Edi-
tion, 1957
[Sawhney et al. 2020] Rohan Sawhney, Keenan Crane, “Monte Carlo Geom-
etry Processing: A Grid-Free Approach to PDE-Based Methods on Volu-
metric Domains”, ACM Trans. Graph., Vol 38, No. 4, Article 1, July 2020,
pp. 1-1–1-18
[Sawhney et al. 2022] Rohan Sawhney, Dario Seyb, Wojciech Jarosz,
Keenan Crane, “Grid-Free Monte Carlo for PDEs with Spatially Vary-
ing Coefficients”, ACM Trans. Graph., Vol. 41, No. 4, Article 53, July 2022,
pp. 53-1–53-17
[Schaback 2020] Robert Schaback, “On COVID-19 Modelling”, in Jahres-
bericht der Deutschen Mathematiker-Vereinigung, 2020, Jun 29, pp. 1–39
[Schönefeld 1977] Reinhold Schönefeld, Hybrid-Simulation, Akademie-Ver-
lag Berlin, 1977
[Schwankner 1980] Robert Schwankner, Radiochemie-Praktikum, UTB,
Schöningh, 1980
[Schwarz 1971] Wolfgang Schwarz, Analogprogrammierung – Theorie und
Praxis des Programmierens für Analogrechner, VEB Fachbuchverlag Leipzig,
1. Ed., 1971
[Sensicle 1968] Allan Sensicle, Introduction to Control Theory for Engineers,
Blackie & Son Limited, 1968
[Shileiko 1964] A. V. Shileiko, Digital Differential Analyzers, Pergamon Press,
The Macmillan Company, New York, 1964
[Simanca et al. 2002] Santiago R. Simanca, Scott Sutherland, Notes for MAT 331 – Mathematical Problem Solving with Computers, The University at Stony Brook, https://www.math.stonybrook.edu/~scott/Book331/331book.pdf
[Small 2001] James S. Small, The Analogue Alternative – The Electronic Ana-
logue Computer in Britain and the USA, 1930–1975, Routledge, London and
New York, 2001
[Smith? 2014] Robert Smith?, Mathematical Modelling of Zombies, University
of Ottawa Press, 2014
[Soroka 1962] Walter W. Soroka, “Mechanical Analog Computers”, in
[Huskey et al. 1962, pp. 8-2–8-16]
[Sprott 2010] Julien Clinton Sprott, Elegant Chaos – Algebraically Simple
Chaotic Flows, World Scientific Publishing Co. Pte. Ltd, 2010
[Stubbs et al. 1954] G. S. Stubbs, C. H. Single, Transport Delay Simulation
Circuits, Westinghouse, Atomic Power Division, 1954
[Sutton et al. 2018] Richard S. Sutton, Andrew G. Barto, Reinforcement
Learning – An Introduction, second edition, The MIT Press, Cambridge,
Massachusetts, London, England, 2018
[Sydow 1964] Achim Sydow, Programmierungstechnik für elektronische Analo-
grechner, VEB Verlag Technik, Berlin, 1964
[Telefunken/Ball] Telefunken, Demonstrationsbeispiel Nr. 5: Ball im Kasten,
[Yavetz 1995] Ido Yavetz, From Obscurity to Enigma – The Work of Oliver
Heaviside, 1872–1889, Birkhäuser Verlag, Basel, Boston, Berlin, 1995
[Yokogawa] Analog Computers Series 3300, Yokogawa Electric Works, Ltd., Cat-
alog No. YEW 3300A
[Zhan et al. 2016] Lusa Zhan, Yipeng Huang, “Analog Sorting – Theory and evaluation”, https://yipenghuang.com/wp-content/uploads/2017/03/analog_sorting.pdf, retrieved December 13th, 2022
Index
diode, 30
  ideal, 80
Dirac, Paul Adrien Maurice, 249
direct analogy, 2
division, 76
Double Scroll attractor, 151
drag, 130
drift coefficient, 238
Duffing oscillator, 161
DVM, 27, 30, 54, 275

EAI, 19, 31
eigenfrequency, 66
Electronic Associates Inc., 19, 31
EMP, 103
equation
  Mathieu's, 119
  Volterra-Lotka-, 145
  heat, 182
  wave, 49
Euler spiral, 165
Eurocard, 275
excitation function, 117
exponentially mapped past, 103

feedback, 51
FET, 95
Feynman-Kac formula, 238
field effect transistor, 95
field programmable gate array, 1
filter
  all-pass, 98
  low pass, 78
fire control system, 7
flow, 176
forcing function, 117
four quadrant multiplier, 34
FPGA, 1
free elements, 24
free potentiometer, 25, 59
frequency
  eigen-, 66
  natural, 66
friction-wheel, 4
function
  delayed unit step, 248
  delta, 249
  excitation, 117
  forcing, 117
  generator, 30
  harmonic, 56
  inverse, 74
  ramp, 249
  special, 73
  step, 248
  transfer, 96
  unit step, 248

glider, 171
GND, 13
ground, 13

half-life, 51
halt, 18, 22
HALT, 18, 22
harmonic function, 56
heat equation, 182
heat transfer, 180
Heaviside, Oliver, 248
Hermitian matrix, 191
high gain amplifier, 16
high performance computing, 1, 8
Hindmarsh-Rose model, 168
Hoelzer, Helmut, 7
HPC, 1, 8
human-in-the-loop, 193
hybrid computer, 1, 32, 219
hysteresis, 85

IC, 18, 21, 33
ideal diode, 80
impedance converter, 27
indirect analogy, 2
modulation, 86
mol, 54
multiplier, 33
  four quadrant, 34
  quarter-square, 33, 34
  two quadrant, 34
music, 210

nabla operator, 127
natural frequency, 66
negative feedback, 13
neuron, 168
neutron
  delayed, 214
neutron kinetics, 214
Nosé-Hoover oscillator, 157

ODE, 49
OP, 18, 22
opamp, 12
open amplifier, 16, 76, 177
open-loop gain, 13
operate, 18, 22
operational amplifier, 12
operator
  del, 127
  nabla, 127
order, 49
  reaction, 110
ordinary differential equation, 49
oscillation
  phugoid, 171
Oslo differential analyzer, 6
overdamped, 68
overload, 11

Padé approximation, 98
partial differential equation, 17, 49, 180
particle
  charged, 131
particular solution, 57, 77
passband, 78
patch
  field, 1
  panel, 1
PDE, 49
pendulum, 62
  damped, 117
phase
  diagram, 118
  space plot, 118, 125
Philbrick, George A., 7
phugoid oscillation, 171
PID-controller, 236
planimeter, 4
plot
  phase space, 118, 125
plotter, 6, 23, 36
plug
  banana, 39
Poisson equation, 238
polynomials, 77
Polyphemus, 7
positive definite, 191
potential flow, 177
potentiometer, 25
  buffered, 27
  free, 25, 59
  loaded, 26
  servo, 39
  setting, 27
  unloaded, 26
POTSET, 27
powers, 77
predator-prey-system, 145
problem time, 20, 55
problem variable, 12, 54
program
  analog, 47
PT8, 28
Python, 222, 231

quadrature generator, 57
V2 rocket, 7
van der Pol, 122
variable
  machine, 12, 54
  problem, 12, 54
  scaling, 12
velocity, 49

XIBNC, 25
XID, 24
XIR, 24

Z-diode, 24
Zener-diode, 24
Zombie apocalypse, 144