Dealing with Electronics
Physics and astronomy classification scheme 2010
01, 06, 07, 84, 85
Authors
Prof. Dr. Manfred Drosg
University of Vienna
Faculty of Physics
Boltzmanngasse 5
1090 Vienna
Austria
[email protected]
ISBN 978-3-11-033840-9
e-ISBN 978-3-11-034108-9
www.degruyter.com
I dedicate my share in writing this book to my dear wife Brigitte
for having gracefully accepted being married to a bigamist
who is enjoying her rival – science – too much.
Manfred Drosg
Preface
The roots of electronics, as of several other natural sciences, lie in physics. This is re-
flected, e.g., in the fact that at the Los Alamos National Laboratory the kernel of the
Electronic Division was previously a group (P-1) of the Physics Division. In addition,
knowledge of electronics is mandatory for any experimental physicist to get the most
out of his instruments, i.e., to succeed in performing complicated experiments. There-
fore, it should not be surprising that an experimental physicist has a fresh approach to this
field of science. In particular, he would look for exact explanations rather than
for rules of thumb. Surprisingly, these exact solutions do not necessarily involve
tedious calculations or higher mathematics. Already as a student in the sixties, I built
a plug-in for the Digital Computing Unit of TMC, effectively programming this "com-
puter" by hard-wiring to control an electrodynamic Mössbauer drive.
No textbook existed that taught how to proceed; I got first-hand knowl-
edge from manuals, the hard way. However, this had the advantage that my electronics
knowledge did not accumulate in the conventional way but always emphasized a physical
approach to the subject. The wealth of electronics insights gained as a doctoral stu-
dent of experimental nuclear physics made it possible to co-author the Textbook of
Nuclear Electronics (Lehrbuch der Nuklearelektronik) in the late 1960s. Decades of
teaching Electronics for Physicists (both by lecturing and giving a practical course)
and several publications in this field prepared me for this attempt to strip electronics
of all unnecessary garments and to show its rather simple and beautiful bare struc-
ture.
By constantly referring to duality, only half of the facts need to be dealt with. Besides,
this kind of mirroring helps in understanding circuits. The number two shows up in
many aspects of electronics; it is characteristic of both analog and binary electronics.
The basic ingredient is the dualism between current and voltage, reflecting the elec-
tromagnetic nature of electrical signals. There are the Fourier transform duals time
and frequency. Dualism in time and space is also based on Maxwell's equations. Then
there is the geometric dualism in arranging electronic elements either in parallel or in
series. Also, the existence of positive and negative charge reflects a twin character re-
sulting in complementary semiconductor devices. Further, an electric signal may flow
either way. In addition, the finite size of the maximum current or voltage results in two
distinct electrical states which are the basis of binary (digital) electronics. In digital
electronics there is parallel and serial logic, and truth tables deal either with standard
(“positive”) or inverse logic operators. Then, harmonic signals are characterized by
two properties: amplitude and phase. Even (electronic) measurements have two mir-
ror faces: the measurement itself and the calibration of the instrument. Finally, there
is the ambiguous duality between binary circuits and binary logic.
Taking full advantage of these dualisms, the amount of knowledge needed to cover most
of the field can be drastically reduced. All variations of known basic circuits can be
arrived at by applying the simple laws underlying these dualisms, and the complete-
ness of each circuit family can be proved. So, three-terminal components have just one
basic circuit; the other two can be explained by feedback. For systematic reasons, the
reverse closed-loop gain in feedback circuits is introduced, which is the only way to describe
the output impedance correctly. Such a science of electronics can be an inspiring com-
plement to the art of electronics, which is very difficult to teach because it can only be
acquired by doing.
This book not only covers the basics of analog and digital electronic circuits,
but also gives full attention to important principles like analog-to-digital conversion
(the basis for electronic measurements) and positive and negative feedback.
The cooperation with my young colleague Michael Morten Steurer, who gained
proficient electronics expertise by designing unique research equipment for biological
experiments, has proven to be very successful. He contributed several subjects to this
book and eliminated a few mistakes. Above all, he excelled in composing the more
than 300 figures found in this book, which support the process of understanding.
Myth 2. Voltage is the primary electric parameter and voltage gain is the ultimate aim
of amplification.
There are two reasons why, in practice, voltage is “more liked” than current. Firstly,
voltages occur across electronic components and are therefore easily accessible. Sec-
ondly, batteries, which are readily available, are natural voltage sources. This prefer-
ence for voltage is reflected in the fact that operational amplifiers are commonly voltage
amplifiers. However, as the amplification of signal power is the aim of any amplifier,
current gain is as important as voltage gain because power is the product of current
and voltage.
Myth 3. To realize a properly working electronic system all components must be supplied
by the same supplier. Besides, they should be as expensive as affordable.
The signals between electronic devices are not supplier specific but their properties
are easily understood, so that mixed electronic systems can be at least as good as those
delivered by a single supplier. Increasing the accuracy beyond what is needed (e.g., measuring
a resistance that must be known to 10% with a 1%-instrument) is, in general, a very
expensive practice.
Myth 9. Since the arrival of circuit simulation programs there is no longer any need to under-
stand circuits.
Simulation tools can be a great help for an expert by saving a lot of time. The amount
of knowledge needed to use a simulation program efficiently is certainly greater than
any rookie can provide.
Myth 10. In teaching electronics active two-ports should be used from the very begin-
ning because with (passive) one-ports (resistors etc.) you cannot learn anything of im-
portance.
For very good students the method of teaching is quite irrelevant. It seems to be good
practice to start any building of knowledge with the foundation.
Contents
Preface | vii
2.2.2.5 Summary | 41
2.2.3 Transforming a two-port into a one-port | 42
2.2.4 Amplification (power gain) | 42
2.2.4.1 Optimum power transfer (power matching) | 43
2.2.4.2 Amplification over several stages | 45
2.2.5 One-port used as two-port | 46
2.2.6 Three-terminal element used as two-port | 47
2.2.7 Passive two-ports | 52
2.2.7.1 Attenuators | 53
2.2.7.2 Nonlinear passive two-ports (clipping) | 55
2.3 Real active two-ports (amplifiers) | 60
2.3.1 Maximum limits for voltages and currents | 60
2.3.2 Characteristics of two-ports | 62
2.3.2.1 (Electronic) switches | 64
2.3.3 Setting the operating conditions (biasing) | 66
2.3.4 Classification of amplifiers according to the operating point | 66
2.3.4.1 Biased amplifiers (long-tailed pair) | 67
2.3.4.2 Comparators | 69
2.3.5 Amplifiers as two-ports | 70
2.3.5.1 Fully differential amplifier | 71
2.3.5.2 Operational amplifier | 73
2.4 Static feedback | 74
2.4.1 General accomplishments by negative feedback | 76
2.4.1.1 Improving the stability | 78
2.4.1.2 Feedback over two stages | 78
2.4.1.3 Improving the linearity | 79
2.4.1.4 Improving the noise immunity | 79
2.4.2 Static positive feedback | 80
2.4.3 Static feedback in circuits | 81
2.4.3.1 Impedances of two-ports with external feedback | 83
2.4.3.2 Other dynamic impedances | 86
2.4.3.3 Transfer properties of two-ports with external feedback | 90
2.4.3.4 Summary of feedback actions on two-ports with external
feedback | 94
2.4.3.5 Feedback in circuits with three-terminal components | 95
2.5 Operational amplifiers | 100
2.5.1 Inverting operational amplifiers | 101
2.5.1.1 Summing amplifier | 104
2.5.1.2 Nonlinear amplifiers | 104
2.5.1.3 Active voltage clipping (voltage limiter) | 105
2.5.2 Noninverting operational amplifiers | 106
2.5.2.1 Voltage follower | 107
Solutions | 365
Index | 383
1 Preparing the ground
Electronics deals with the generation, transmission, modification, measurement, and
all kinds of applications of electrical signals. Analog electronics concentrates on the
shape (amplitude) of the signal whereas digital electronics uses signals of standard-
ized shape to perform “logical” operations.
In both fields, electronic circuits are used. Therefore, it is necessary to “under-
stand” such circuits. This is done by means of models, usually by way of (circuit) di-
agrams that contain all (and only) the essentials that would allow a knowledgeable
person to build such a network.
Basically, there are three objectives when dealing with electronics: understanding,
designing, and building circuits (networks). This book concentrates on understand-
ing. For the design it is helpful to use reference circuits. Building of (advanced) elec-
tronic circuits is done by the industry and not so much by individuals any more. Such
individuals can be compared with artists: their knowledge is partly unconscious, i.e.,
it is based on long practical experience. Although the construction of circuits is out-
side the scope of this book, there will be several practical hints on how to overcome
frequently encountered problems.
Since the middle of the last century, everyday life has become more and more depen-
dent on devices based on electromagnetism, and more recently on optoelectronics.
The general science on which these devices depend is called electronics. Electronic
systems have become so complicated that a user cannot be expected to be an expert
in electronics.
A user usually requires a system that performs the task he wants it to perform.
Hardly any driver of a rental car will bother about, e.g., what type of clutch the rented car has.
All the driver wants is that certain relevant parameters (brakes, steering, acceleration,
all lights, etc.) are the way the user needs them. Technical details are of little concern.
An example more to the point would be a mobile phone. There, the system is
rather complicated because more than one antenna receives the signal of the hand-
set. Consequently, a computer will be involved to connect the appropriate receiving
station with the appropriate sending antenna. Although it is an exceedingly sophisti-
cated electronic system, all a user has to know about electronics is that the battery of
his mobile phone must be charged regularly.
Mobile phones are a ready example of a complex hybrid system. In Figure 1.1, such
a system is broken down into its three basic subsystems:
– a (source) one-port in which the conversion of sound waves into electrical signals
takes place (called microphone, having the function of a transducer),
– an electronic two-port that conditions the electric signals and transports them to
– the load one-port that converts electric signals back into sound waves (called
loudspeaker, having the function of an actuator).
The generation of electric signals from other physical quantities (here acoustic ones)
is not a subject of this book because it belongs in the field of transducers;
neither is the use of electric signals to drive optical, mechanical, thermal, or other
devices, which would be covered by a book on actuators.
When one starts breaking down this system into devices (handsets, antennas,
computers, etc.), one needs much greater electronics knowledge to understand their
function. Within these devices, there are electronic circuits, and these are built from
electronic components. To understand the function of electronic components, you will
finally need physics to answer your questions. So we have a hierarchy from the sim-
ple to the very complex as shown in Figure 1.2. Depending on where you stand in this
hierarchy a different approach is appropriate. In this book, we mainly deal with cir-
cuits and networks and in some cases with electronic devices. We are not going to in-
dulge in the physical properties of electronic components which are covered in a large
number of books and manuals. However, we need the knowledge of basic electrical
properties to understand circuits. More complex systems require specialized knowl-
edge and cannot be dealt with in a general way.
Fig. 1.2. Hierarchy in electronics: General Physics – Electronic Component – Circuit – Network –
Instrument (Device) – System. In this book, we mainly deal with circuits and networks and in some
cases with electronic devices. However, we need the knowledge of basic electrical properties of
electronic components to understand circuits.
1.2 Basics of electricity
The central physical quantity in electronics is electric (signal) power 𝑝(𝑡). It is not cur-
rent 𝑖(𝑡) or voltage 𝑣(𝑡), even if these quantities are the practical basis for determining
the electric power according to
𝑝(𝑡) = 𝑣(𝑡) × 𝑖(𝑡) ,    (1.1)
meaning that at each moment 𝑡 the electric power 𝑝 is the product of voltage 𝑣 times
current 𝑖. As we are dealing with electronic principles, we will not deal with high power
electronics, nor with very low power electronics. These fields require specific practical
knowledge which is outside the scope of this book.
Following a generally accepted practice
– capital letters are used for the symbols of quantities that do not change in time 𝑡
(current 𝐼, voltage 𝑉, power 𝑃) and
– lower case symbols for quantities that vary in time (𝑖(𝑡), 𝑣(𝑡), 𝑝(𝑡)), i.e., they are
functions in time.
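To make this concrete, here is a minimal Python sketch (not taken from the book; the component values are assumed) that evaluates the instantaneous power 𝑝(𝑡) = 𝑣(𝑡) × 𝑖(𝑡) of equation (1.1) for a resistive load driven by a sinusoidal voltage. Lowercase functions denote time-varying quantities, uppercase names constant ones, following the convention above:

import math

R = 50.0        # constant load resistance in ohms (assumed example value)
V_PEAK = 10.0   # constant peak voltage in volts (assumed example value)
F = 1000.0      # signal frequency in hertz (assumed example value)

def v(t):
    """Time-varying voltage v(t) across the load."""
    return V_PEAK * math.sin(2 * math.pi * F * t)

def i(t):
    """Time-varying current i(t) through the load (Ohm's law)."""
    return v(t) / R

def p(t):
    """Instantaneous power p(t) = v(t) * i(t), cf. equation (1.1)."""
    return v(t) * i(t)

for k in range(5):
    t = k * 1e-4   # one sample every 0.1 ms
    print(f"t = {t*1e3:3.1f} ms   v = {v(t):6.2f} V   i = {i(t)*1e3:6.1f} mA   p = {p(t)*1e3:6.1f} mW")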
Problem
1.1. Satisfy yourself that for all electronic instruments that come to your mind power
is needed to fulfill the task of this instrument (data transmission, display, computing,
mobile phone, etc.). Name electric instruments which do not need auxiliary power to
fulfill their task.
Even if electronic measurements involve primarily voltage or current, such devices ac-
tually measure power. However, they are designed to be used either as a voltmeter or
as a current meter. Thus, for practical purposes, voltage and current are the basic physi-
cal quantities that count in electronics. Both quantities can be derived from one natu-
ral phenomenon, the existence of the electrical charge 𝑄. Actually, there are two types
of charge, the negative charge and the positive charge. Charge is always the attribute
of a particle and consequently quantized. The ordinary electron has one elementary
negative charge unit. Positive charge in electronics is the result of missing negative
charge, e.g., when an atom has lost a (valence) electron.
The unit of electric power is 1 W(att) which is 1 J(oule)/s(econd). The unit of the
electric charge 𝑄 is 1 C(oulomb). It takes the charge of about 6.2 × 10¹⁸ electrons to give
1 Coulomb of negative charge. Moving charge is called current 𝑖, i.e., current is charge
per time. The unit of current is 1 A(mpere) which is 1 C/s.
The energy (power 𝑝(𝑡) times time 𝑡) per charge is called voltage 𝑣
(or potential difference). Its unit is 1 V(olt) and 1 V = 1 J/C. For electronic purposes,
voltage can be viewed as the potential difference between two points constituting
an electromotive force that can displace electric charge, i.e., it can enable a current
flow. In circuit analysis, the polarity of voltage sources can be assumed at will. When
easily feasible it will be so chosen that it conforms to the assumed direction of current
or vice versa.
A basic property of electric charge is that it does not vanish, it is conserved. This
is not only so in particle physics but is essential for Maxwell’s equations which govern
the propagation of electromagnetic signals in space and time. In particular, they show
how electric and magnetic fields are interwoven into each other. In circuit theory, we
do not have to exploit fully the beauty of these equations. Because of the relatively
small size of circuits usually there is no need to pay heed to the distance (space) cov-
ered by the electronic signal. Signal propagation time is in most cases not considered.
(Remember: light covers about 30 cm in 1 ns.) In steady-state electronics, not only the
space but also the time dependence is disregarded.
Consequently, we will first concentrate on the easiest cases, namely those where
the electric quantities are constant in time and space. These are commonly called
DC (direct current) responses. Later on, we will allow changes in time, i.e., we will
consider frequency (steady-state) responses and transient responses. Only in special
cases we will cover the propagation of electric charge in space.
As there exist two types of electric charge, it became necessary to discriminate be-
tween them. They were called positive and negative charge rather arbitrarily. Charges
of the same kind repel each other, opposite charges attract each other. If there is a
surplus of one type of charge carriers at one terminal and a deficiency at the other,
we deal with a current (or voltage) source. If, as a result of this imbalance, charge is
moved between the two terminals, an electric current flows.
Conventional current flow is (for historical reasons) from a positive potential to a less positive
(negative) potential.
However, electrons, which are the usual carriers of the electric charge, move in the oppo-
site direction (being attracted by the positive potential). Actually, one may assume the
direction of current flow at one's will as long as one maintains consistency all over the
electrical network. If current flows in the assumed direction, it is positive. If the anal-
ysis yields a negative current, it just means that the actual current flows opposite to
the assumed direction.
Let us start out with one-ports, i.e., devices with two terminals. It makes sense
to choose the direction of current flow into a one-port such that it is from the higher
potential to the lower potential (see Figure 1.3). Thus, the direction of the assumed
current flow in a dissipative element, i.e., an element that absorbs power, makes the
dissipated power positive, as it should be. However, current delivered by a source has
a sign opposite to the sign of its voltage, resulting in a negative power; a source does
not consume but supplies power. For that reason, sources are called active elements.
However, when charging an accumulator (battery), the direction of current is reversed
so that power is consumed while it is being charged, i.e., such a source behaves dissipatively
and is, in this operational state, a passive element.

Fig. 1.3. The current flow in a one-port is defined from the terminal of higher potential
to the terminal of lower potential. The power 𝑝(𝑡) = 𝑣(𝑡) × 𝑖(𝑡) of the element is positive
for dissipative (passive) elements and negative for active elements.
Many devices function irrespective of the actual direction of the current flow. De-
vices that respond differently to currents of different direction are called polarized de-
vices. The arrow in a symbol of a polarized device indicates the direction of the con-
ventional current flow.
Problem
1.2. (a) How would you find out that power is dissipated in an accumulator that is being
charged?
(b) Find out for yourself why you may charge accumulators but not regular batteries.
Electrical quantities cover a huge range of values. To make the numbers more read-
able, prefixes to the units have been introduced, in steps of 10³. When doing electronics,
it is unavoidable to become familiar with these prefixes (Table 1.1). In addition, we need
the prefix d (deci), i.e., 10⁻¹.
Table 1.1. Prefixes to the units.
Exa    E   10¹⁸
Peta   P   10¹⁵
Tera   T   10¹²
Giga   G   10⁹
Mega   M   10⁶
Kilo   k   10³
Milli  m   10⁻³
Micro  μ   10⁻⁶
Nano   n   10⁻⁹
Pico   p   10⁻¹²
Femto  f   10⁻¹⁵
Atto   a   10⁻¹⁸
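As a small illustration (not part of the book), the following Python helper expresses a numerical value with the prefixes of Table 1.1; prefix steps of 10³ are assumed, as in the text:

PREFIXES = [
    (1e18, "E"), (1e15, "P"), (1e12, "T"), (1e9, "G"), (1e6, "M"), (1e3, "k"),
    (1.0, ""), (1e-3, "m"), (1e-6, "µ"), (1e-9, "n"), (1e-12, "p"),
    (1e-15, "f"), (1e-18, "a"),
]

def with_prefix(value, unit):
    """Return the value scaled to the largest prefix that keeps its magnitude >= 1."""
    for factor, symbol in PREFIXES:
        if abs(value) >= factor:
            return f"{value / factor:g} {symbol}{unit}"
    return f"{value:g} {unit}"   # smaller than 1 a(unit): leave unscaled

print(with_prefix(4.7e-9, "F"))    # 4.7 nF
print(with_prefix(22000.0, "Ω"))   # 22 kΩ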
cuits, there might be components in the diagram indicating circuit-intrinsic (natural)
properties that are not represented by lumped components in the circuit.
It should be clear that the location of a component in a circuit diagram has no
relationship to the location of its physical counterpart in the actual circuit. We can re-
arrange components in the schematic as long as the connections between components
stay the same as in the circuit.
It is very good practice to have a clear layout in circuit diagrams and to arrange the elements in
such a way that the quiescent current flows from top to bottom, i.e., the voltages get smaller and
smaller toward the bottom line.
Linearization is, like in other sciences, the basis of easy understanding of electronic
circuits. In addition, there are very helpful fundamental laws that make life easier, e.g.,
the concept of closed current loops, the existence of a node, or the principle of feed-
back. By restriction to small-signal response, nonlinearities can be avoided.
It is easy to bring an electronic circuit, shown in a schematic diagram, to life if the
circuit is simple. A lot of know-how (and experience) is needed to do so with demand-
ing circuits. Such practical designs are sometimes a piece of art.
1.3.3 Linearization
Usually, only linearized science is considered because of the easier access to it for
the human mind. The same is true for our approach to electronics. At the beginning,
we restrict ourselves to linear electronics so that the essentials are not fogged up by
Although these one-ports can be related to actual electronic components, we will wait
to do that until later when we have introduced duality. Thus, the amount of detailed
knowledge can be reduced by a factor of two. Besides, we will restrict ourselves to lin-
ear one-ports which allow the basic understanding of practically all circuits. However,
it is not too early to stress one fact that is easily forgotten when overwhelmed by many
new insights:
Any electronic component retains its intrinsic properties independent of its (momentary) use, un-
less it is broken.
1.3.4 Duality
Although duality has been known for more than a century, it is rarely applied despite
the fact that it reduces the number of circuits and of laws governing electronics by half,
and can be a great help in understanding the circuits. From Maxwell’s equations, the
symmetry between electric field and magnetic field is obvious. This translates into the
duality between voltage and current, i.e., for each component, circuit, equation (law)
that deals with current there is an equivalent component, circuit, equation (law) with
the voltage as characteristic property.
Current and voltage are equivalent electric quantities, they are dual to each other.
For current, the three basic one-ports have the following properties:
active element: 𝑖(𝑡) = 𝑖S (𝑡), independent of 𝑣(𝑡);
dissipative element: 𝑖(𝑡) = 𝐺 × 𝑣(𝑡);
reactive element: 𝑖(𝑡) = 𝐶 × d𝑣(𝑡)/d𝑡    (1.2)
with the source current 𝑖S , the conductance 𝐺 (unit: 1 S(iemens)), and the capaci-
tance 𝐶 (unit: 1 F(arad)). From duality follows
active element: 𝑣(𝑡) = 𝑣S (𝑡), independent of 𝑖(𝑡);
dissipative element: 𝑣(𝑡) = 𝑅 × 𝑖(𝑡);
reactive element: 𝑣(𝑡) = 𝐿 × d𝑖(𝑡)/d𝑡    (1.3)
with the source voltage 𝑣S , the resistance 𝑅 (unit: 1 Ω (Ohm)), and the inductance
𝐿 (unit: 1 H(enry)). Duality makes 𝑖S into 𝑣S , 𝐺 into 𝑅, and 𝐶 into 𝐿. Therefore, the num-
ber of basic relations describing linear circuit elements is reduced from six to three.
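A minimal numerical sketch (assumed example values, not from the book) of the three relations (1.2) and their duals (1.3); the derivatives d𝑣/d𝑡 and d𝑖/d𝑡 are approximated by finite differences:

G = 2e-3      # conductance in siemens
C = 1e-6      # capacitance in farads
R = 500.0     # resistance in ohms (dual of the conductance case)
L = 1e-3      # inductance in henries (dual of the capacitance case)
DT = 1e-6     # time step for the finite-difference derivative

def i_dissipative(v):             # i(t) = G * v(t)
    return G * v

def i_reactive(v_now, v_prev):    # i(t) = C * dv(t)/dt
    return C * (v_now - v_prev) / DT

def v_dissipative(i):             # dual: v(t) = R * i(t)
    return R * i

def v_reactive(i_now, i_prev):    # dual: v(t) = L * di(t)/dt
    return L * (i_now - i_prev) / DT

print(i_dissipative(5.0))         # 0.01 A through the conductance at 5 V
print(i_reactive(5.0, 4.9))       # 0.1 A into the capacitance for dv = 0.1 V in 1 µs
print(v_dissipative(0.01))        # 5 V across the resistance at 10 mA
print(v_reactive(0.01, 0.009))    # 1 V across the inductance for di = 1 mA in 1 µs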
Among above relations
𝑣(𝑡) = 𝑅 × 𝑖(𝑡) (1.4)
is known as Ohm’s law. It provides a linear relation between voltage 𝑣 and current 𝑖.
From (1.2) and (1.3) it is clear that the conductance 𝐺 is dual to the resistance 𝑅. As both
the conductance and the resistance are realized by the same physical element, called
resistor, this element is self-dual. This self-duality of the resistor is the reason why a
conductor is rarely considered as physical element. However, for duality reasons it is
−𝐼 𝑉
a b
𝑉 −𝐼
𝑉S 𝐼S
𝐼 𝑉
c d
𝛥𝐼 𝛥𝑉
𝛥𝑉 𝛥𝑉
𝛥𝐼
𝑅= 𝐺=
𝛥𝑉 𝛥𝐼 𝛥𝐼 𝛥𝑉
𝛥𝐼
𝑉 𝐼
Fig. 1.4. Characteristics of an ideal voltage source (a), an ideal current source (b), a linear resistance
(c), and a linear conductance (d). Note that in order to match the physical direction of the currents,
the currents’ signs are inverted in the characteristics of the active elements (sources).
1.3 Helpful basic procedures | 9
Dualities
essential to accept the concept of having resistance and conductance side by side. The
dual version of Ohm’s law 𝑖(𝑡) = 𝐺 × 𝑣(𝑡) is often disregarded because the physical
element representing the conductance is the resistor. Another self-dual element is the
switch (Section 2.3.2.1).
Although just one parameter describes a linear device, we are introducing the
term characteristic already at this point. A characteristic of an electronic element is
the graphical presentation of the relation between two variables. A linear relation ob-
viously results in a linear characteristic, as shown in Figure 1.4 for several linear rela-
tions between current and voltage. To begin with, we do not consider signals that
change in time. Therefore, the following discussions disregard reactive elements.
Table 1.2 summarizes the essential features of duality between current and volt-
age.
Observe that the (real) load is a special case of the complex load. Consequently, the term impe-
dance is more general than resistance, and admittance more general than conductance. Thus, the
terms resistance and conductance are only used when it is important to stress the difference.
From the fact that current is electric charge in motion it follows directly that
d𝑞 = 𝑖 × d𝑡 = 𝐶 × d𝑣 ,
and by duality
d𝛷 = 𝑣 × d𝑡 = 𝐿 × d𝑖 .
From Ohm’s law we have
d𝑣 = 𝑅 × d𝑖 .
Thus, the three basic electronic components 𝑅, 𝐿, and 𝐶 which we introduced in Sec-
tion 1.3.4 can be defined in a different way. In 1971, it looked as if Leon Chua had discov-
ered a missing link among these three basic electronic components. Into this scheme
of the electric quantities voltage, current, electric charge, and magnetic flux
d𝑞 = 𝐶 × d𝑣
d𝑣 = 𝑅 × d𝑖
d𝑖 = (1/𝐿) × d𝛷
the relation
d𝛷 = 𝑀 × d𝑞
fits perfectly well. The hitherto unknown property 𝑀 was called memristivity, and
it was assigned to a component named memristor. From the identity formed by these
four relations one gets
𝑀 = 𝐿 / (𝑅 × 𝐶) ,
i.e., memristivity has the dimension of a resistance. Memristivity is a resistance with
a memory effect, i.e., its value depends on the amount of current that has flowed through
it. Much more recently, such a device has actually been built. Later, also capacitors
with memory effect have been built. This suggests that components with memory ef-
fect fit into some other scheme than suggested, i.e., the suggestion of a missing link
now appears in a different light. Components with memory effect have not been used
much if at all. For that reason, we are going to pay no further heed to them.
The attributes small, large, slow, fast, etc., of signals or parameters are practically
always relative, i.e., in comparison to something specific. For instance, if an output
impedance is called small it means that it is much smaller than the impedance of its
burden, the load impedance, and a small load impedance means that it is a heavy
burden to the source, i.e., it is much smaller than the output impedance of the source.
A special case is the small-signal. It means that a specific circuit behaves linearly
toward this signal. It could be a voltage signal of 100 V or of 10 mV, i.e., a specific
signal may be considered as small-signal for one circuit, but not for the other.
Unfortunately, scholars of electronics give too much weight to rules of thumb.
Many electronic terms can only be viewed as such rules, e.g., if a battery (a voltage source)
is burdened by a load with an impedance (much) smaller than its own impedance it
behaves like a current source. As this is a very rare situation, it is justified to call a
battery a voltage source. However, one should always be open to the fact that there
will be situations in which the generally used term may be misleading.
Problem
1.3. Is a resistance value of 100 Ω small or large?
2 Static linear networks
A functional arrangement of electronic elements is an electronic circuit. An assembly
of linear circuits forms a linear network. At this moment we disregard any time depen-
dence in the behavior of electronic circuits. We just consider the momentary status of
the electric variables current and voltage. We could state as well that we are only con-
sidering the DC (direct current) behavior of the networks.
2.1 One-ports
An electronic element with two terminals (leads) is called one-port. To understand its
properties, we need to know the voltage 𝑣(𝑡) between the terminals (across the one-
port) and the magnitude and direction (sign) of the current 𝑖(𝑡) through the one-port
(Figure 1.3).
Using ideal (linear) components helps us to concentrate on the essentials. Table 2.1
summarizes the six ideal linear one-ports.
Exchanging current with voltage according to duality halves the number to three
ideal linear one-ports. One must realize that one can choose freely either the current
one-ports or the voltage one-ports when trying to understand a circuit. Usually, one of
these choices will give an easier approach to the understanding of the property of a
circuit. With some experience, it will be quite obvious which to choose, e.g., in a loop
the current will have more importance, in a parallel arrangement the voltage.
When trying to understand a circuit one is free to choose the voltage or the current version; one of
them will be more appropriate for the problem at hand.
To begin with, we restrict ourselves to sources and dissipative elements, i.e., to
four elements.
Table 2.1. Family of ideal linear one-ports and their symbols as used in circuit diagrams.
Combining a one-port with an ideal current source can only be done sensibly in parallel.
Problem
2.1. Combining an ideal current source in series with another one-port:
(a) Place alternately an ideal voltage source, a resistor, or an ideal current source in
series to an ideal current source and find out, why none of these combined one-
ports makes sense.
(b) Why is there no sense in arranging any one-port in series to an ideal current
source? (How does the augmented one-port act differently from the ideal one?)
Combining a one-port with an ideal voltage source can only be done sensibly in series.
There is no sensible way of combining an ideal voltage source with an ideal current
source, i.e., one cannot add a voltage to a current, which nobody would suggest anyway.
One can only combine ideal sources of the same kind. Generalizing one gets:
Problem
2.2. Combining an ideal voltage source in parallel with another one-port:
(a) Place alternately an ideal voltage source, a resistor, or an ideal current source in
parallel to an ideal voltage source and find out, why none of these combined one-
ports makes sense.
(b) Why does it not make sense to arrange any one-port in parallel to an ideal voltage
source? (How does the augmented one-port act differently from the ideal one?)
A serial arrangement of one-ports is dual to the parallel arrangement of their dual counterpart.
Usually, electronic engineers appear to prefer working with voltages rather than with
currents. Why is that so? To have direct access to current one must place the sensing
device (the meter) into the loop, i.e. one must open the loop in which the current flows
because one must arrange the meter in series. A voltage measurement can be done in
parallel, i.e. one can just put the terminals of the voltmeter to the two spots between
which the voltage should be measured. The development of current probes has less-
ened the importance of voltage measurements because the current is measured by the
magnetic field accompanying it. In that case, the loop stays intact.
Based on the fact that ideal sources are unchangeable in their output signal the fol-
lowing two principles can be used for simplifying circuits.
1. Circuit components that do not affect the functionality of a circuit may be removed from this
circuit.
2. If a voltage between two points (a current through a junction) is fixed, i.e., has an invariable
value, then, for the analysis of the network, an ideal source can be used to replace the part of
the circuit that provides this invariable electrical signal.
An ideal source is characterized by its property to maintain at its output its electrical
signal independent of the load, i.e.
– the ideal current source will deliver the current 𝑖S into any circuit connected to it,
and
– the ideal voltage source will maintain 𝑣S at its leads independent of the load con-
nected to it.
Fig. 2.1. Real linear sources: a combination of an ideal voltage source with a resistor 𝑅 (a), and an
ideal current source with a conductor 𝐺 (b).
Fig. 2.2. Linear circuit used in Problem 2.5: an ideal 24 V voltage source together with a 9 kΩ resistor
and two 18 kΩ resistors; the output voltage 𝑣o is taken at the output terminals.
Any combination of linear one-ports, i.e., any linear electric network, is, between two arbitrarily cho-
sen points, electrically equivalent to a real linear source, by choice either a linear voltage source
(Thevenin) or a linear current source (Norton).
These theorems only state that such a replacement can be done, but not, how it is
done. Two linear dependences are identical if two independent parameters of them
agree. There are three parameters that usually can be accessed easily between the
two points in the circuit: open-circuit voltage, short-circuit current, and impedance.
Which pair is easiest to use cannot be stated in general; it must be judged case by case to reduce
the calculational effort.
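As an illustration (a sketch, not the book's procedure), the following Python snippet derives both the Thevenin and the Norton equivalent of a linear network from two of the three easily accessible parameters, here the open-circuit voltage and the short-circuit current; the numerical values are assumed:

from dataclasses import dataclass

@dataclass
class Thevenin:
    v_s: float   # source voltage in volts
    r_s: float   # series (output) resistance in ohms

@dataclass
class Norton:
    i_s: float   # source current in amperes
    g_s: float   # parallel (output) conductance in siemens

def equivalents(v_oc, i_sc):
    """Thevenin and Norton equivalents from open-circuit voltage and short-circuit current."""
    r = v_oc / i_sc            # output resistance R = v_oc / i_sc
    return Thevenin(v_oc, r), Norton(i_sc, 1.0 / r)

thev, nort = equivalents(12.0, 3e-3)   # assumed example: 12 V open circuit, 3 mA short circuit
print(thev)   # Thevenin(v_s=12.0, r_s=4000.0)
print(nort)   # Norton(i_s=0.003, g_s=0.00025)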
Problems
2.3. Apply Thevenin’s theorem to a real linear current source. What is the difference
in behavior between a real linear voltage source and a real linear current source?
2.4. A linear network has a voltage of 10 V between two terminals. When these ter-
minals are short-circuited a current of 10 mA flows through this short-circuit. Replace
this linear network between these terminals according to Norton’s theorem.
2.5. Replace the linear circuit of Figure 2.2 at the output terminals
(a) by a real voltage source (Thevenin’s theorem), and
(b) by a real current source (Norton’s theorem).
(c) Determine 𝑣o in both cases.
It depends on the problem at hand whether the preferred electrical quantity is voltage or current,
i.e. whether a current or a voltage source is used for the substitution of a linear network.
Kirchhoff’s first law is based on the conservation of electric charge. It can be formulated
as follows
The sum of all currents flowing into a node (junction) equals the sum of currents flowing out of this
node.
Fig. 2.3a. Explaining Kirchhoff's first law. Fig. 2.3b. Explaining Kirchhoff's second law.

Kirchhoff's second law is based on the conservation of energy: the sum of all (signed) voltages
around any closed loop equals zero. To arrive at this simple equation, advantage was taken of the
fact that voltages, like currents, are signed quantities.
This is nothing really new because we already came across the duality between short-
circuit and open-circuit as special cases of node and loop. (As a short-circuit is the
smallest reasonable node with 𝑘 = 2, an open-circuit is, for duality reasons, the small-
est loop.) The following prescription, already given before, is confirmed by Kirch-
hoff's laws. As already shown, voltages are added in series, currents in parallel. This
is demonstrated in Figure 2.4.
Fig. 2.4. Voltages add in a series arrangement (𝑣0 = 𝑣1 + 𝑣2 + 𝑣3 ); currents add in a parallel
arrangement (𝑖0 = 𝑖1 + 𝑖2 + 𝑖3 ).

Fig. 2.5. Six possibilities of arranging sources and dissipative elements in parallel.
the current of one source cannot flow through the other source, i.e.
This finding is consistent with the slope of the characteristic of an ideal current source
in Figure 1.4.
Two ideal current sources can be replaced by one ideal source with a total source
current 𝑖S
𝑖S = 𝑖S1 + 𝑖S2 (2.3)
Having two ideal voltage sources in parallel is in contradiction to the definition
that an ideal voltage source maintains the voltage at its terminals independent of the
load. There cannot be two (different) voltages at the common terminals of the com-
bined one-port. This fact has practical implications. Without specific precautions, bat-
tery cells (which in many cases come close to ideal voltage sources) may not be wired
in parallel.
A combination of an ideal voltage source in parallel to an ideal current source
makes the current source redundant. The voltage at the common terminals is deter-
mined by the voltage of the voltage source and is independent of the current delivered
by the current source. The same ruling applies to a resistor in parallel to an ideal volt-
age source.
As discussed above, a conductance in parallel to an ideal current source consti-
tutes a “real” current source. Applying Thevenin’s theorem, it may be replaced by a
real voltage source.
Finally, one can place two dissipative elements in parallel. In such a case, it is
more favorable to view them as conductors because the two conductances 𝐺1 and 𝐺2
can easily be combined to one.
𝐺 = 𝐺1 + 𝐺2 (2.4)
Dually, for two dissipative elements in series the resistances add:
𝑅 = 𝑅1 + 𝑅2 .    (2.6)
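A minimal sketch (not from the book) of the dual combination rules: conductances in parallel add, eq. (2.4), and resistances in series add, eq. (2.6); the remaining cases follow by converting between 𝑅 and 𝐺:

def parallel_conductance(*g):
    """Total conductance of conductances in parallel: G = G1 + G2 + ..., eq. (2.4)."""
    return sum(g)

def series_resistance(*r):
    """Total resistance of resistances in series: R = R1 + R2 + ..., eq. (2.6)."""
    return sum(r)

def parallel_resistance(*r):
    """Resistances in parallel, evaluated via the dual (conductance) picture."""
    return 1.0 / parallel_conductance(*(1.0 / x for x in r))

print(parallel_conductance(2e-3, 3e-3))   # 0.005 S
print(series_resistance(9e3, 18e3))       # 27000 Ω
print(parallel_resistance(18e3, 18e3))    # 9000 Ω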
Fig. 2.6. The six dual counterparts of the arrangements shown in Figure 2.5.
This finding is consistent with the slope of the characteristic of an ideal voltage source
in Figure 1.4.
𝑣S = 𝑣1 + 𝑣2
according to Kirchhoff’s second law. After adding 𝑅1 and 𝑅2 to the combined re-
sistor 𝑅 one gets for the current (Ohm’s law, Section 1.3.4)
𝑖S = 𝑣S / 𝑅 .
(b) An ideal current source with the current 𝑖S is burdened by two conductors 𝐺1 and
𝐺2 in parallel. The current 𝑖1 flows through 𝐺1 , 𝑖2 through 𝐺2 . Then
𝑖S = 𝑖1 + 𝑖2
and, after combining 𝐺1 and 𝐺2 into the total conductance 𝐺, one gets for the voltage
𝑣S = 𝑖S / 𝐺 .
For 𝑛 resistors in series, the voltage 𝑣𝑗 across the resistor 𝑅𝑗 relates to the total voltage 𝑣t as
𝑣𝑗 / 𝑣t = 𝑅𝑗 / (𝑅1 + 𝑅2 + ⋯ + 𝑅𝑛 ) .    (2.8)
For the case 𝑛 = 2, i.e., two resistors in series, this becomes the basic voltage divider
equation
𝑣1 / 𝑣t = 𝑅1 / (𝑅1 + 𝑅2 ) .    (2.9)
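The voltage divider equation (2.9) and its dual, the current divider for two conductances in parallel, can be captured in a short sketch (not from the book; the numbers are assumed):

def voltage_division(r1, r2):
    """Fraction v1/vt of the total voltage appearing across R1 (R1 and R2 in series), eq. (2.9)."""
    return r1 / (r1 + r2)

def current_division(g1, g2):
    """Dual relation: fraction i1/it of the total current flowing through G1 (G1 and G2 in parallel)."""
    return g1 / (g1 + g2)

print(voltage_division(200e3, 200e3))   # 0.5 for two equal resistors in series
print(current_division(6e-3, 1e-3))     # about 0.857 of the current flows through the 6 mS conductor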
Problems
2.6. An ideal current source of 10 mA feeds a resistor of 1 kΩ in series to a resistor of
10 kΩ.
(a) Give the current through the second resistor.
(b) Give the voltage across the first resistor.
(Figure: a real voltage source 𝑣S with internal resistance 𝑅S driving a load resistor 𝑅L ; load
current 𝑖L .)
2.7. An ideal voltage source of 10 V, an ideal current source of 10 mA, and a resistor
of 10 kΩ form a loop. Give the voltage across the resistor.
2.8. A chain of five resistors of 1 kΩ each, lies in parallel to an ideal voltage source of
8 V.
(a) Simplify the circuit by combining the upper four resistors.
(b) Determine the voltage across the bottom resistor.
(c) The divided voltage is used as a real voltage source. Apply Thevenin’s theorem
and find the source voltage and impedance.
(d) Replace the real voltage source of (c) by a real current source using Norton’s the-
orem.
2.10. Write down the equations for current division, using resistors instead of con-
ductors.
2.12. An ideal voltage source with 𝑣S is shunted by four resistors, having an impe-
dance of 0.1 kΩ, 1 kΩ, 10 kΩ, and 100 kΩ, respectively.
(a) Give the voltage across the 1 kΩ resistor.
(b) Give the total current delivered by the source.
Recommendation:
– Current sources contained in a loop should be converted to voltage sources ap-
plying Thevenin’s theorem. The voltage sources not under consideration must be
replaced by their impedances. Then the individual voltage contribution can be
obtained by voltage division between the impedance of the voltage source under
consideration and the sum of all the other impedances.
– Voltage sources connected at a node should be converted to current sources ap-
plying Norton’s theorem. The current sources not under consideration must be
replaced by their admittances. Then the individual current contribution can be
obtained by current division between the admittance of the current source under
consideration and the sum of all the other admittances.
In circuit analysis a voltage source (e.g., a power supply) that is independent of the signal source
may be replaced by a short-circuit, i.e. both terminals of a grounded power supply act as signal
ground.
This means, in practical applications, that the signal path through a voltage power
supply must not have any appreciable parasitic impedance for the signal (from the
wiring etc.).
Example 2.2 (Replacing a voltage divider by a real voltage source using Thevenin’s
theorem). Some (input) voltage 𝑣i is reduced by voltage division to (the output volt-
age) 𝑣o . To this end, it is applied to two resistors 𝑅1 , and 𝑅2 arranged in series. Like
any other linear network, this network may be replaced by a real voltage source.
– The quantity of interest is 𝑣o . In our case it is the voltage across 𝑅2 . Therefore, this
voltage must be retained in the simplified circuit.
– A linear network, represented by a linear 𝑣–𝑖-function (i.e., a straight line), needs
two independent quantities for its description. One may choose between three
convenient ones: open-circuit voltage, short-circuit current, and (output) resis-
tance (conductance).
– One choice is obvious: the variable of interest is 𝑣o which is the open-circuit volt-
age.
Applying Kirchhoff’s second law we get 𝑣i = 𝑣1 +𝑣2 . From Ohm’s law, we get 𝑣1 = 𝑖×𝑅1
and 𝑣2 = 𝑖 × 𝑅2 . Thus, 𝑣i = 𝑖 × (𝑅1 + 𝑅2 ), and the voltage division is
𝑣o / 𝑣i = 𝑣2 / 𝑣i = (𝑖 × 𝑅2 ) / (𝑖 × (𝑅1 + 𝑅2 )) = 𝑅2 / (𝑅1 + 𝑅2 ) .    (2.10)
As we need the value of the resistor 𝑅 of the real voltage source, we choose the (out-
put) resistance as the second independent property. Let us have a look at Figure 2.8.
Fig. 2.8. (a) Circuit diagram of the voltage division. (b) Parallel configuration of 𝑅1 and 𝑅2 when
viewed from the output. (c) The real voltage source.
In Figure 2.8a, the voltage 𝑣i is symbolized by an ideal voltage source of that value. Ac-
cording to the superposition theorem this source is equivalent to a short-circuit when
viewed from the output as shown in Figure 2.8b. Consequently, 𝑅1 and 𝑅2 are in par-
allel. And for parallel resistors their conductance (their inverse) is added to give the
total conductance
1/𝑅 = 1/𝑅1 + 1/𝑅2 = (𝑅2 + 𝑅1 ) / (𝑅1 × 𝑅2 )    (2.11)
so that
𝑅 = (𝑅1 × 𝑅2 ) / (𝑅1 + 𝑅2 )    (2.12)
is obtained. Thus, it is shown that a real voltage source with an ideal voltage source of
the value 𝑣o and a resistor 𝑅 replaces the voltage divider in any regard that deals with
its output (Figure 2.8c).
The main purpose of this exercise is to stress the importance of the voltage divider
equation and of the equation describing the parallel configuration. Probably, anybody
who deals with electronic networks knows these equations by heart to save time.
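For completeness, a short sketch (an assumed helper, not from the book) that performs the replacement of Example 2.2 numerically: it returns the open-circuit voltage of eq. (2.10) and the output resistance of eq. (2.12) of the real voltage source that is equivalent to the divider:

def divider_thevenin(v_i, r1, r2):
    """Real-voltage-source equivalent of a divider: vi drives R1 in series with R2, output across R2."""
    v_o = v_i * r2 / (r1 + r2)    # open-circuit (divided) voltage, eq. (2.10)
    r_out = r1 * r2 / (r1 + r2)   # R1 in parallel with R2 seen from the output, eq. (2.12)
    return v_o, r_out

# Usage example with assumed values: a 12 V input divided by 9 kΩ / 18 kΩ.
v_o, r_out = divider_thevenin(12.0, 9e3, 18e3)
print(f"v_o = {v_o:.2f} V, R = {r_out:.0f} Ω")   # v_o = 8.00 V, R = 6000 Ω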
Replacing a current divider according to Norton's theorem by a real current source is
dual to the above approach.
Problems
2.14. Do partial electric signals in nonlinear networks add up (linearly) to the total
one?
2.16. Two real voltage sources (𝑣S1 = 2.4 V, 𝑅S1 = 1 kΩ, 𝑣S2 = 1.2 V, 𝑅S2 = 1 kΩ)
supply in parallel a load resistor of 𝑅L = 100 Ω. Calculate all three currents.
Fig. 2.9. Linear circuit from Problem 2.17 (a 10 V source, a 5 V source, resistors of 6 kΩ, 1 kΩ, and
2 kΩ, a switch, and the voltage of interest 𝑣x ).

Fig. 2.10. Linear circuit from Problem 2.18 (a 10 mA source, a 5 mA source, conductors of 2 mS,
6 mS, and 1 mS, a switch, and the current of interest 𝑖x ).
2.17. Find the values of 𝑣x for both switch positions in Figure 2.9 after simplifying
the 10 V voltage divider circuit using Thevenin’s theorem and using the superposition
theorem.
2.18. Find the values of 𝑖x for both switch positions in Figure 2.10, applying Norton’s
theorem to the 10 mA-current divider circuit and using the superposition theorem.
Example 2.3 (Combined voltage signal). Figure 2.11a shows a combination of two ideal
voltage sources loaded by a resistor of 1 kΩ and Figure 2.11b shows the resulting volt-
age at the resistor. One source delivers a quiescent voltage 𝑉op = 6 V and the other
source delivers a voltage 𝑣s that switches between −1 V and +1 V resulting in a com-
bined voltage between 5 V and 7 V as shown in Figure 2.11b. The current through the
resistor is the superposition of 𝑖S , a current switched between −1 mA and +1 mA, with
the quiescent current 𝐼op of 6 mA.
Fig. 2.11a. Superposition of two voltage sources.
Fig. 2.11b. The resulting voltage across the resistor 𝑅.
Let us look backward from the result of the example. The output signal shown in Fig-
ure 2.11b is moderately complicated and dealing with it as a whole is not straightfor-
ward. However, taking advantage of the superposition theorem makes life much eas-
ier. Splitting the output voltage into two voltages transforms one complicated task
into two easy tasks. This is generally accepted practice. The quiescent voltage 𝑉op
is the operating point voltage of the device (resistor), and the changing voltage 𝑣S is
considered the signal. So we have an operating point of the resistor of (𝑉op = 6 V,
𝐼op = 6 mA), or more compactly (6 V, 6 mA), and a signal voltage 𝑣S of ±1 V.
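A small numerical sketch (not from the book) of this decomposition: the resistor voltage and current of Example 2.3 are split into the quiescent (operating point) part and the signal part:

V_OP = 6.0                            # quiescent voltage in volts
R = 1e3                               # load resistance in ohms
signal = [+1.0, -1.0, +1.0, -1.0]     # signal voltage v_S switching between ±1 V

for v_s in signal:
    v_r = V_OP + v_s                  # superposition of the two source voltages
    i_r = v_r / R                     # total current through the resistor
    i_op = V_OP / R                   # quiescent current (6 mA)
    i_s = v_s / R                     # signal current (±1 mA)
    print(f"v_R = {v_r:.0f} V, i_R = {i_r*1e3:.0f} mA = {i_op*1e3:.0f} mA + ({i_s*1e3:+.0f} mA)")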
Problem
2.19. Avoid bipolar signals for the signal source. Find the two obvious combinations
of voltage sources that would also give the voltage pattern shown in Figure 2.11b.
Problems
2.20. A (passive) meter has an input resistance of 2.5 kΩ and has full-scale reading
at an input power of 4 μW. By adding one one-port, this meter can be made to be used
as voltmeter for measuring voltages up to 100 V. Which arrangement is necessary and
what are the essential properties of this one-port?
2.21. A (passive) meter has an input conductance of 0.4 mS and has full scale reading
at an input power of 4 μW. By adding one one-port, this meter can be made to be used
as an ammeter for measuring currents up to 1 A. Which arrangement is necessary and
what are the essential properties of this one-port?
2.22. An active voltmeter has in the 1 V range a burden current of less than 10 pA, i.e.
it draws from the circuit to be measured at most that much current. The reading, when
measuring the output voltage of a real voltage source having an internal resistance of
0.1 Ω, is 1.000 000 00 V. Is there a need to correct the reading for the loading of the
circuit?
2.23. An active ammeter has in the 0.1 μA range a burden voltage of 55 mV, i.e. at full
scale it draws from the circuit to be measured a power of 5.5 nW. The reading when
measuring the output current of a real current source having an internal conductance
of 1 μS is 1.000 00. Is there a need to correct the reading for loading the circuit?
Fig. 2.12. Simultaneous measurement of the current through and the voltage across an unknown
component: (a) with the ammeter in series, (b) with the voltmeter in parallel.
From the ratio of the measured values, the sum of the unknown resistance 𝑅x and the resistance 𝑅A
of the ammeter is obtained: 𝑣m /𝑖m = 𝑅x + 𝑅A . Consequently, the value of the unknown resistor is
𝑅x = 𝑣m /𝑖m − 𝑅A .
The correct value is smaller than the measured ratio suggests; it must be corrected by
subtracting the resistance of the ammeter.
The arrangement in Figure 2.12b is dual. Exchanging the current (meter) with a
voltage (meter), and resistance with conductance, and paying attention to the duality
of parallel and serial configuration, results in equations with an identical structure.
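The two corrections can be summarized in a short sketch (not from the book; the measured values used below are assumed example numbers):

def rx_series_ammeter(v_m, i_m, r_a):
    """Arrangement (a): R_x = v_m / i_m - R_A (subtract the ammeter resistance)."""
    return v_m / i_m - r_a

def gx_parallel_voltmeter(i_m, v_m, g_v):
    """Arrangement (b), by duality: G_x = i_m / v_m - G_V (subtract the voltmeter conductance)."""
    return i_m / v_m - g_v

# Example: measured 10.1 V and 1.0 mA with a 100 Ω ammeter in series.
print(rx_series_ammeter(10.1, 1.0e-3, 100.0))          # 10000.0 Ω
# Example: measured 1.0 mA and 9.9 V with a 4 µS voltmeter in parallel.
print(1.0 / gx_parallel_voltmeter(1.0e-3, 9.9, 4e-6))  # about 10308 Ω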
Problems
2.24. Correction factors
(a) Give the correction factor for the unknown resistance rather than unknown con-
ductance, as obtained by duality for the configuration in Figure 2.12b.
(b) Which of the two configurations in Figure 2.12 requires a smaller correction factor
under the assumption that the resistor 𝑅x = 1/𝐺x = 10 kΩ with 𝑅A = 100 Ω,
and 𝑅V = 250 kΩ? Give the correction factor of the measured value for either
configuration in per cent.
2.25. The voltage division 𝑔𝑣 of a voltage divider (consisting of two 200 kΩ resistors
in series) is measured with an active voltmeter (𝑅V = 10 MΩ) using the 10 V range.
The voltage is provided by a voltage source with negligible output impedance. The
measured total voltage 𝑣i = 10.000 0 V, the measured divided voltage 𝑣o = 4.950 9 V
(the uncertainties of these two measurements are assumed to be negligible).
(a) Give the value for the amount of voltage division 𝑔𝑣 corrected for the loading by
the voltmeter.
(b) Explain why the (absolute) calibration of the voltmeter has no impact on the re-
sult.
Caution: The display (indicator) of a meter shows, at best, the electric power dissipated in the
input of the meter. Such a result must be corrected for the burden that the meter is for the circuit.
However, it cannot be corrected if the signal that the meter measures is different from what it is
supposed to be. In doubtful cases, the use of an oscilloscope is unavoidable.
It helps visually oriented people in understanding circuits when the assumed (or even
better actual) quiescent current flows from top to bottom. Therefore, the circuit ele-
ments in most circuit diagrams should be vertically oriented. Often it pays to redraw
the circuit diagram arranging the elements in such a way that it is intuitively clear
in which direction the current flows. Thus, the highest positive voltage will be at the
top, the lowest (negative) voltage at the bottom. In cases where no negative voltage is
present the ground, the reference point, at which the voltage is assumed to be zero,
will be at the bottom. If this point is connected to earth potential, it might be called
earth instead.
Redundant components (e.g., elements in series to a current source or parallel to
a voltage source) should be left out as well as functionless ones (e.g., one-ports with
only one terminal connected). Elements of the same kind in series or in parallel may
be (properly) combined to one element as long as no relevant information gets lost.
Depending on the problem at hand the source can be converted to the proper kind
(applying Thevenin’s or Norton’s theorem, respectively). If components are arranged
in parallel, a current source is appropriate; if they are arranged in series, a voltage source.

Two elements having the same electrical characteristic are electronically indistinguishable, i.e.,
they behave the same way.
Fig. 2.13. Forward characteristic of a junction diode: (a) "normal" presentation with the operating
point (0.64 V, 13.1 mA), (b) semilogarithmic presentation.
In a circuit, an electronic component will usually have a bias, i.e., a voltage–current pair
constituting the operating point. Without signal, the voltage across the component and
the current through it are given by these quiescent values.
Problem
2.27. Give the operating point of a passive (Section 2.2.4) electronic component that
is not connected to any power supply.
In Figure 2.13a an operating point (o.p.) is shown together with the tangent through it.
The slope of this tangent usually depends on the position of the operating point. The
ratio d𝑣/d𝑖 is called (small-signal) impedance 𝑍 of said component
𝑍 = d𝑣/d𝑖 |o.p. .    (2.13)
In view of duality, we introduce, in addition, the ratio d𝑖/d𝑣 which we call (small-
signal) admittance 𝑌
𝑌 = d𝑖/d𝑣 |o.p. .    (2.14)
The impedance (and admittance) of linear components is obviously independent of
the operating point so that we arrive at
𝑍 = 𝑣/𝑖 , and 𝑌 = 𝑖/𝑣 .    (2.15)
This relation we know as Ohm’s law (and its reverse) with the resistance replaced
by the more general term impedance, and the conductance by admittance. In the fu-
ture, we will refrain, whenever appropriate, from using the term resistance (and con-
ductance) and will be using the more general terms impedance (and admittance) in-
stead.
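As a numerical illustration (not from the book), the small-signal impedance of eq. (2.13) can be estimated by a finite difference at the operating point; the exponential characteristic used here is an assumed stand-in for a nonlinear one-port, not a model given in the text:

import math

I_SAT = 1e-12    # assumed saturation current in amperes
V_T = 0.025      # assumed thermal voltage in volts

def i_of_v(v):
    """Assumed nonlinear characteristic i(v) of the one-port."""
    return I_SAT * (math.exp(v / V_T) - 1.0)

def small_signal_impedance(v_op, dv=1e-6):
    """Z = dv/di at the operating point, via a symmetric finite difference."""
    di = i_of_v(v_op + dv) - i_of_v(v_op - dv)
    return 2.0 * dv / di

for v_op in (0.55, 0.60, 0.65):
    print(f"V_op = {v_op:.2f} V  Z = {small_signal_impedance(v_op):8.2f} Ω")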
An operating point may not even temporarily exceed a maximum voltage or current
(Section 2.3.1). Even before these limits are reached, it is plausible that the voltage-
to-current ratio of a linear device will change, i.e. that it will become nonlinear, be-
fore the device is destroyed by too high power, current, or voltage. So, components
with ideal linear characteristic do not exist, they will be linear just in a limited operat-
ing range, called small-signal operating range (or small-signal dynamic range). Such a
range could be as large as several hundred (even thousand) volts or as small as, say,
10 mV.
The nonlinearity of a “linear” component (e.g., resistor) is an unwanted byprod-
uct. However, there are several nonlinear components with very useful properties.
Some such i–v-characteristics are sketched in Figure 2.14.
In many applications, the property of a diode is best described as that of a current
check valve. Current is allowed to flow in one direction, but not (or nearly so) in the
other. The allowed direction is called the forward direction, indicated by the arrow in the
symbol; the current in the reverse direction 𝑖r is usually very small, depending on the
technology used to produce the diode.

Fig. 2.14. Typical i–v-characteristics of some nonlinear elements: (a) diode, (b) Zener diode with
𝑣Z = 6.0 V, (c) N-shaped, and (d) S-shaped.
With Zener diodes advantage is taken of the (very) low impedance occurring at a
characteristic reverse voltage 𝑣Z . If the operating point is set there, the voltage does
not change much with current, as required for a voltage source. Consequently, such
Zener (or breakdown) voltages can be used to establish a reference voltage for a power
supply.
The two characteristics in Figures 2.14c and 2.14d have a portion with negative
impedance. There
𝑍 = d𝑣/d𝑖 |o.p. < 0 .    (2.16)
In addition, they are dual to each other, i.e. exchanging 𝑣 with 𝑖 converts one into
the other. Any (simple) characteristic with a portion of negative impedance must ei-
ther be S- or N-shaped because the (more or less) linear region with negative impe-
dance must be connected to the origin (0 V, 0 mA). One-ports with negative impedance
may be used for the amplification of signal power (Section 2.2.5), and therefore, are
called active one-ports (devices). There exist also dynamic negative impedances (Sec-
tion 2.4.3.2), i.e. circuits in which negative impedance is brought about by (positive)
feedback action (Section 2.5.4.1).
The reasons for linearization were outlined in Section 1.3.3. Linearization is done by
approximating (some part of) an i–v-characteristic by a straight line. By specifying the
required closeness between the curve and its linear approximation, a range both for
the voltage and the current is defined, in which this approximation is valid. Signals
within this range are called small-signals, independent of their actual size.
Any signal may be decomposed into the quiescent component (operating point
values) and a variable signal (Section 2.1.3.5). If this variable signal is a small-signal,
then one can operate with the linear impedance 𝑍 making life easy. In that case, in
a model circuit the nonlinear element is replaced by 𝑍 under the tacit assumption
that the bias of the circuit is such that the correct operating point is set. Thus, we are
dealing with a linear circuit.
Problem
2.28. A current of 0.25 mA flows through some nonlinear element if a voltage of 0.5 V
is applied. When this voltage is −10 V, the current is 0.002 mA. What is the (small-
signal) impedance of this one-port?
In Figure 2.15 the characteristics of Figure 2.14 are linearized in a way that they resemble
the original ones.

Fig. 2.15. Linearized versions of the i–v-characteristics of Figure 2.14.

In the case of the diode, a moderately complicated model may be applied using an ideal
diode D, a voltage source 𝑉f , and two resistors 𝑅f and 𝑅r–f (see Figure 2.16).

Fig. 2.16. Linear model of a junction diode built from an ideal diode D, a voltage source 𝑉f , and the
resistors 𝑅f and 𝑅r–f .
Example 2.5 (Linear model of a junction diode). A simple linear model of a junction
diode needs three parameters, the forward impedance 𝑅f , the reverse impedance 𝑅r
and the forward voltage 𝑉f , plus an ideal current check valve, an ideal diode D. The
values of these parameters vary according to the technology of the diode: 𝑅f = 10⁰ Ω to
10¹ Ω, 𝑅r = 10⁵ Ω to 10⁷ Ω, and 𝑉f = 0.3 V to 0.7 V. If the ideal diode does not conduct,
it has the admittance 𝑌D = 0 S, and the voltage across the ideal diode is given by
𝑣d ×(𝑅r −𝑅f)/𝑅r −𝑉f , and the impedance of the one-port 𝑍d = 𝑣d /𝑖d = 𝑅r . For voltages
𝑣d ≥ 𝑉f /(1 − 𝑅f/𝑅r ) the ideal diode is conducting having an impedance of 𝑍D = 0 Ω
and the impedance of the one-port is 𝑍d = 𝑅f . Thus, at the point 𝑣d = 𝑉f /(1 − 𝑅f /𝑅r )
and 𝑖d = 𝑉f /(𝑅r − 𝑅f ) the slope of the characteristic changes from 1/𝑅r to 1/𝑅f . Observe
that the voltage source used in this model cannot act as an active source. It does not
supply power because the reverse-biased ideal diode prevents current from flowing out
of the source. It just sinks current, maintaining its voltage.
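The piecewise-linear model of Example 2.5 is easily evaluated numerically. The following Python sketch implements it; the parameter values are merely illustrative numbers within the ranges quoted above.

```python
# Sketch of the piecewise-linear junction-diode model of Example 2.5.
# R_f, R_r and V_f are illustrative values within the ranges given in
# the text, not data of a specific diode.

R_f = 5.0        # forward impedance (ohm)
R_r = 1.0e6      # reverse impedance (ohm)
V_f = 0.6        # forward voltage (V)

def diode_current(v_d):
    """Current through the one-port of Fig. 2.16 for a voltage v_d."""
    v_knee = V_f / (1.0 - R_f / R_r)      # point where the slope changes
    if v_d < v_knee:                      # ideal diode blocks: Z_d = R_r
        return v_d / R_r
    i_knee = V_f / (R_r - R_f)            # current at the knee
    return i_knee + (v_d - v_knee) / R_f  # ideal diode conducts: Z_d = R_f

for v in (-1.0, 0.3, 0.7, 1.0):
    print(f"v_d = {v:5.2f} V  ->  i_d = {diode_current(v)*1e3:8.3f} mA")
```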
Problem
2.29. The characteristic of a Zener diode is well described by a straight line intersect-
ing the abscissa at the reverse breakdown voltage 𝑉Z . From the slope of this line, the
(constant) small-signal impedance 𝑍Z (or 𝑟Z ) is obtained. With 𝑉Z = 5.96 V, an operat-
ing point on the abscissa (5.96 V, 0.00 mA) is defined. With 𝑍Z = 2.5 Ω, the operating
point at any current can be calculated within this model. Find the parameters of the
small-signal model for the following circuit at the given operating point: A Zener diode
with the above properties is (reverse) biased by a voltage source of 𝑉s = 13.5 V and
a series resistor of 390 Ω. The load resistor that shunts the Zener diode has 390 Ω, as
well. Replace the linear network by a real voltage source (Thevenin’s theorem) express-
ing the model properties by means of another real voltage source. Then calculate the
current in this loop, and the voltage at the terminals of the Zener diode model yielding
the actual operating point.
If a nonlinear element (n.e.) is connected to a real voltage source 𝑉S with source resistance 𝑅S (Figure 2.17a), Kirchhoff's voltage law gives
𝑖ne = (𝑉S − 𝑣ne )/𝑅S or 𝑖ne = −𝑣ne /𝑅S + 𝑉S /𝑅S
which is the equation of a straight line, the so-called load line because 𝑅S functions
as a load for the nonlinear element. This line is easily constructed by the zero voltage
point with 𝑖ne0 = 𝑉S /𝑅S and the zero current point with 𝑣ne0 = 𝑉S . Independent of the
characteristic of the nonlinear element, its operating point must lie on this load line.
The actual operating point is the intersection of the (unknown) characteristic with the
load line. In Figure 2.17b the load line is shown. As an example the characteristic of a
tunnel diode is given, too.
Fig. 2.17a. A nonlinear element at the output of a real voltage source. Fig. 2.17b. The operating
point must lie on the load line independent of the characteristic of the nonlinear element. As an
example the characteristic of a tunnel diode is shown.
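Numerically, the operating point is found as the intersection of the load line with the element's characteristic. The following Python sketch does this by bisection; it uses an exponential diode law merely as a stand-in for the (in general unknown) characteristic, and the source values are assumptions.

```python
# Sketch: operating point as the intersection of the load line
# i = (V_S - v)/R_S with a nonlinear characteristic i_ne(v).
import math

V_S, R_S = 2.0, 100.0          # real voltage source (assumed values)
I_s, v_T = 1e-12, 0.025        # stand-in characteristic (assumed values)

def i_ne(v):                   # characteristic of the nonlinear element
    return I_s * (math.exp(v / v_T) - 1.0)

def load_line(v):              # current dictated by the source and R_S
    return (V_S - v) / R_S

lo, hi = 0.0, V_S              # bisection on the difference of the two curves
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if load_line(mid) > i_ne(mid):
        lo = mid
    else:
        hi = mid

v_op = 0.5 * (lo + hi)
print(f"operating point: V_op = {v_op:.3f} V, I_op = {i_ne(v_op)*1e3:.2f} mA")
```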
Thus, all possible operating points must lie on the load line, i.e. the two-dimen-
sional manifold of possible operating points has been reduced to a one-dimensional
one. If, in addition, the maximum allowable power (Section 2.3.1) of the nonlinear el-
ement is known, then this information can be added by way of the maximum-power
hyperbola. The equation 𝑃 = 𝐼 × 𝑉 is that of an equilateral hyperbola. If the operat-
ing point is outside the maximum-power hyperbola then the nonlinear element would
be destroyed by too high a temperature (Section 2.3.1). Therefore, if the load line does
not intersect the maximum-power hyperbola, one can be assured that the power dissi-
pated in the nonlinear element will not exceed the maximum allowed power indepen-
dent of the actual characteristic. As pointed out in Section 2.2.4.1 the maximum power
of an operating points is in the middle of the dynamic range. There the hyperbola has
its closest approach to the load line.
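Because the dissipated power along the load line peaks at 𝑣ne = 𝑉S /2, a one-line check suffices to see whether the load line stays inside the maximum-power hyperbola. The numbers in the Python sketch below are assumptions for illustration.

```python
# Sketch: does the load line stay inside the maximum-power hyperbola?
# Along the load line i = (V_S - v)/R_S the dissipated power
# p(v) = v*(V_S - v)/R_S peaks at v = V_S/2 with p_peak = V_S**2/(4*R_S).
# If p_peak stays below the element's rating, no operating point on the
# load line can overload the element, whatever its characteristic.

V_S = 12.0       # source voltage (V), assumed
R_S = 600.0      # source/series resistance (ohm), assumed
P_rating = 0.1   # maximum allowed power of the nonlinear element (W)

p_peak = V_S**2 / (4.0 * R_S)   # closest approach of the load line to the hyperbola
print(f"peak power on the load line: {p_peak*1e3:.1f} mW")
print("safe for any characteristic" if p_peak <= P_rating else "may overload")
```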
Problems
2.30. The linear network in Figure 2.18 is intended to feed any (nonlinear) one-port.
(a) Determine the maximum power that this circuit can deliver into any one-port.
(b) Draw the load line into an i–v diagram.
(c) If the characteristic of the one-port is such, that the current of the operating point
is 10 mA, how large is the power dissipated in that one-port?
[Fig. 2.18. Network of Problem 2.30: a 10 V voltage source, a 10 mA current source, and resistors of
1 kΩ, 500 Ω, and 1 kΩ feeding the one-port.]
2.31. To protect a nonlinear element with unknown characteristic but a known max-
imum power rating of 0.1 W a resistor of 1 kΩ is inserted between the element and the
voltage source.
(a) What is the maximum allowed voltage 𝑉S max that may be applied without damag-
ing any nonlinear element?
(b) What is the worst case maximum power dissipated in the 1 kΩ resistor?
Electrical networks that modify signals can be described by two-ports having an input
and an output port. The modification can affect the amplitude and/or the time behavior.
Any two-port may be used with input and output exchanged. This is reflected
in the so-called two-port convention defining the sign of the currents and the voltages
at the two-port (Figure 2.19). In this book, we use the subscripts “i” to indicate input
variables and “o” to designate output variables. In some instances (e.g., two-port ma-
trices) the alternative method of using subscripts 1 and 2, respectively, makes more
sense.
The symmetry between (assumed) output and input is important. In many cases
there will be an obvious (natural) input/output.
Two-ports may be arranged in a chain of two or more (even an infinite number of)
two-ports connected in a cascade (with each output port connected to the input port
of the following).
Problem
2.32. There will be a step-up transformer feeding a high-voltage transmission line
and a step-down transformer to feed the utilities. Viewing transformers as two-ports,
is it possible to recognize which port is the natural input port?
Two-ports in which the output signal can contain more signal power than is dissipated
in the input are called active two-ports. Such amplification can be symbolized by the
use of a controlled electric source inside the two-port. Figure 2.20 summarizes the four
types of controlled sources.
Fig. 2.19. Definition of the signs of the electrical signals at a two-port.
Fig. 2.20a. Voltage-controlled voltage source with 𝑣S = 𝑓(𝑣i ). Fig. 2.20b. Current-controlled voltage
source with 𝑣S = 𝑓(𝑖i ). Fig. 2.20c. Current-controlled current source with 𝑖S = 𝑓(𝑖i ). Fig. 2.20d.
Voltage-controlled current source with 𝑖S = 𝑓(𝑣i ).
By applying repeatedly the principles of duality on a voltage-controlled voltage
source with 𝑣S = 𝑓(𝑣i ) the three other types are easily arrived at: voltage-controlled
current source with 𝑖S = 𝑓(𝑣i ), current-controlled current source with 𝑖S = 𝑓(𝑖i ),
current-controlled voltage source with 𝑣S = 𝑓(𝑖i ), thus defining the four (forward)
transfer parameters:
– voltage gain 𝑔𝑣 = 𝑣o /𝑣i ,
– transadmittance (mutual conductance) 𝑔m = 𝑖o /𝑣i ,
– current gain 𝑔𝑖 = 𝑖o /𝑖i , and
– transimpedance (mutual resistance) 𝑟m = 𝑣o /𝑖i .
In view of the basic symmetry of a two-port, there will be two controlled sources in
any practical two-port. One for the forward transfer, and one for the reverse transfer,
according to the choice of the input port. The reverse transfer takes care of the internal
feedback of the output signal to the input signal.
To describe voltage and current both at the input and output, one needs four indepen-
dent parameters. The best presentation depends on the application. Depending on
the type of transfer parameter there is a choice of four linear models of two-ports. Of
course, the parameters of one model can be converted into those of any other model.
When connected to other two-ports, there is an optimum model depending on the
configuration. (Remember: Instead of the term resistor we use the more general word
impedance, and instead of conductor the more general term admittance.)
The inverse hybrid (𝑔-)parameters are defined by 𝑖i = 𝑔i 𝑣i + 𝑔r 𝑖o and 𝑣o = 𝑔f 𝑣i + 𝑔o 𝑖o , where
𝑔i is the input admittance with an open-circuit at the output,
𝑔r is the reverse current gain with a short-circuit at the input,
𝑔f is the forward voltage gain with an open-circuit at the output,
𝑔o is the output impedance when there is a short-circuit at the input.
Fig. 2.21. Inverse hybrid parameter two-port with an independent voltage source 𝑣S , source resistance
𝑅S at the input and load impedance 𝑍L at the output as well as two dependent sources describing
the forward and reverse transfer of the two-port.
Observe that each of the parameters has a different dimension because the fixed parameter
for the input variables is the output current whereas it is the input voltage for
the output variables. This mixing is reflected in the term hybrid. The full name of this
parameter choice is inverse hybrid parameters because of the reverse current gain.
The 𝑔-parameters are the optimum option of two-port parameters when two two-
ports are connected in a parallel–series configuration as done in parallel–series feed-
back (Section 2.4.3).
Problem
2.33. A two-port described by the following 𝑔-parameters is loaded by a 10 kΩ resis-
tor: 𝑔i = 1 mS, 𝑔r = 0, 𝑔f = 110, and 𝑔o = 100 kΩ.
(a) Determine the current gain.
(b) Give the output impedance.
𝑖i = 𝑦i 𝑣i + 𝑦r 𝑣o (2.19a)
𝑖o = 𝑦f 𝑣i + 𝑦o 𝑣o (2.19b)
with
𝑦i = 𝑖i /𝑣i |𝑣o =0 , 𝑦r = 𝑖i /𝑣o |𝑣i =0 (2.20a)
𝑦f = 𝑖o /𝑣i |𝑣o =0 , 𝑦o = 𝑖o /𝑣o |𝑣i =0 (2.20b)
where
𝑦i is the input admittance when there is a short-circuit at the output,
𝑦r is the reverse transadmittance with a short-circuit at the input,
𝑦f is the forward transadmittance with a short-circuit at the output,
𝑦o is the output admittance when there is a short-circuit at the input.
Fig. 2.22. Admittance-parameter two-port with an independent real voltage source 𝑣S at the input
and load impedance 𝑍L at the output as well as two dependent current sources describing the for-
ward and reverse transfer of the two-port.
All 𝑦-parameters are admittances with the unit 1 S. The 𝑦-parameters are the opti-
mum option of two-port parameters when two two-ports are connected in a parallel–
parallel configuration as used in parallel–parallel feedback (Section 2.4.3).
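The loaded behavior follows directly from (2.19a) and (2.19b): with the load relation 𝑖o = −𝑌L × 𝑣o the voltage gain becomes 𝑔𝑣 = −𝑦f /(𝑦o + 𝑌L ), and with the source killed at the input the output admittance becomes 𝑌out = 𝑦o − 𝑦r 𝑦f /(𝑦i + 𝑌S ). The short Python sketch below evaluates these two expressions for assumed parameter values.

```python
# Sketch: loaded y-parameter two-port (Fig. 2.22).
# From i_o = y_f*v_i + y_o*v_o and i_o = -Y_L*v_o:  g_v = -y_f/(y_o + Y_L).
# With the input source killed (i_i = -Y_S*v_i), eq. (2.19a) gives
# v_i = -y_r*v_o/(y_i + Y_S), hence Y_out = y_o - y_r*y_f/(y_i + Y_S).

y_i, y_r, y_f, y_o = 1e-3, 1e-6, 2e-3, 2e-5   # assumed parameters (S)
Y_L = 1e-4                                     # load admittance (S), assumed
Y_S = 1e-3                                     # source admittance (S), assumed

g_v   = -y_f / (y_o + Y_L)                     # loaded voltage gain
Y_out = y_o - y_r * y_f / (y_i + Y_S)          # output admittance seen by the load

print(f"g_v   = {g_v:8.2f}")
print(f"Y_out = {Y_out*1e6:8.2f} uS")
```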
Problem
2.34. A two-port described by the following 𝑦-parameters is loaded by a 100 kΩ re-
sistor: 𝑦i = 1 mS, 𝑦r = 0 mS, 𝑦f = 1 mS, 𝑦o = 0.01 mS.
(a) Determine the voltage gain.
(b) Give the output impedance.
𝑣i = 𝑧i 𝑖i + 𝑧r 𝑖o (2.21a)
𝑣o = 𝑧f 𝑖i + 𝑧o 𝑖o (2.21b)
with
𝑧i = 𝑣i /𝑖i |𝑖o =0 , 𝑧r = 𝑣i /𝑖o |𝑖i =0 (2.22a)
𝑧f = 𝑣o /𝑖i |𝑖o =0 , 𝑧o = 𝑣o /𝑖o |𝑖i =0 (2.22b)
where
𝑧i is the input impedance with open-circuit output,
𝑧r is the reverse transimpedance with open-circuit input,
𝑧f is the forward transimpedance with open-circuit output,
𝑧o is the output impedance with open-circuit input.
All 𝑧-parameters are impedances with the unit 1 Ω. The 𝑧-parameters are the optimum
option of two-port parameters when two two-ports are connected in a series–series
configuration as used in series–series feedback (Section 2.4.3).
𝑣i = ℎi 𝑖i + ℎr 𝑣o (2.23a)
𝑖o = ℎf 𝑖i + ℎo 𝑣o (2.23b)
with
ℎi = 𝑣i /𝑖i |𝑣o =0 , ℎr = 𝑣i /𝑣o |𝑖i =0 (2.24a)
ℎf = 𝑖o /𝑖i |𝑣o =0 , ℎo = 𝑖o /𝑣o |𝑖i =0 (2.24b)
Fig. 2.23. Impedance parameter two-port showing an independent real current source 𝑖S at the input
and the load impedance 𝑍L at the output as well as two dependent voltage sources describing the
forward and reverse transfer of the two-port.
Fig. 2.24. Hybrid-parameter two-port with an independent real current source 𝑖S , load impedance
𝑍L , and two dependent sources.
where
ℎi is the input impedance when there is a short-circuit at the output,
ℎr is the reverse voltage gain with an open-circuit at the input,
ℎf is the forward current gain with a short-circuit at the output,
ℎo is the output admittance with open-circuit input.
Observe that the mixing of the independent variables (voltage at the input, current at
the output) results in the name hybrid parameters (characterized by the forward cur-
rent gain). The ℎ-parameters are the optimum option of two-port parameters when two
two-ports are connected in a series–parallel configuration as used in series–parallel
feedback (Section 2.4.3).
2.2.2.5 Summary
Which kind of parameter family should be chosen depends on the configuration of
the other two-ports or on the kind of transfer parameter that is desired. From the fact
alone that the input parameters are given under the condition 𝑣o = 0 (short-circuit
at the output) or 𝑖o = 0 (open-circuit at the output) it is evident
that the input properties depend on the load impedance. Analogously, the output properties depend
on the source impedance.
The characteristic property of a one-port is just one parameter, its impedance, relating
a change in current to its change in voltage. However, linear two-ports are described
by two coupled linear equations, relating input voltage and current to output voltage
and current.
However, when loading the output of a two-port with the impedance 𝑍L , the rela-
tion between 𝑣o and 𝑖o
𝑣o = 𝑍L × (−𝑖o ) (2.25)
reduces the number of free parameters to one. Therefore, the situation at the input can
be expressed by the input impedance 𝑍i which depends on the four two-port parame-
ters and 𝑍L . Using the 𝑧-parameters (that are appropriate for this case) the following
relation is easily obtained from Figure 2.23:
𝑣i = 𝑍i × 𝑖i = (𝑧i − 𝑧r × 𝑧f /(𝑍L + 𝑧o )) × 𝑖i . (2.26)
Because 𝑍i depends on gain (the transimpedances 𝑧f and 𝑧r ), it is a dynamic im-
pedance. We will deal some more with dynamic impedances in Section 2.4.3.2.
Sloppily, the transfer parameters are called gain. However, it is difficult to visualize
transimpedance and transadmittance as “gain” because gain is usually thought to
be a dimensionless ratio. This leads us back to the beginning of Chapter 1. There we
stressed the importance of power over current and voltage. Consequently, what counts
is power gain 𝑔𝑝 which can be calculated using any of the four transfer parameters.
Only amplifiers that can provide power gain are of any value.
Power gain is a measure of signal transfer from the two-port input to its output.
Therefore, it is a function of its parameters and its load 𝑍L . It is given by the ratio of
power 𝑝L dissipated in the circuit load over the power 𝑝i dissipated in the input of the
circuit
𝑔𝑝 = 𝑝L /𝑝i . (2.27)
[Figure: a real voltage source 𝑣S with source impedance 𝑍S driving a load 𝑅L ; load voltage 𝑣L , current 𝑖.]
𝑝 = 𝑣 × 𝑖 = 𝑖 × 𝑅 × 𝑖 = 𝑖² × 𝑅 (2.28)
𝑔𝑝 = 𝑝L /𝑝i = (𝑖o² × 𝑍L )/(𝑖i² × 𝑍i ) = 𝑔𝑖² × 𝑍L /𝑍i . (2.29)
The current gain 𝑔𝑖 is readily obtained from (2.21b) and Figure 2.23 as
𝑔𝑖 = 𝑖o /𝑖i = −𝑧f /(𝑧o + 𝑍L ) (2.30)
yielding with (2.26)
𝑔𝑝 = 𝑧f² /(𝑧o + 𝑍L ) × 𝑍L /(𝑧i × (𝑧o + 𝑍L ) − 𝑧r 𝑧f ). (2.31)
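For a given set of 𝑧-parameters and a given load, (2.26), (2.30), and (2.31) can be evaluated directly. The Python sketch below does so with assumed parameter values and checks the result against (2.29).

```python
# Sketch: loaded z-parameter two-port of Fig. 2.23, evaluated with
# eqs. (2.26), (2.30) and (2.31). Parameter values are illustrative
# assumptions, not data of a real device.

z_i, z_r, z_f, z_o = 1e3, 10.0, 5e4, 2e4   # ohm
Z_L = 1e4                                   # load impedance (ohm)

Z_in = z_i - z_r * z_f / (Z_L + z_o)        # input impedance, eq. (2.26)
g_i  = -z_f / (z_o + Z_L)                   # current gain, eq. (2.30)
g_p  = (z_f**2 / (z_o + Z_L)) * Z_L / (z_i * (z_o + Z_L) - z_r * z_f)  # eq. (2.31)

print(f"Z_in = {Z_in:8.1f} ohm")
print(f"g_i  = {g_i:8.2f}")
print(f"g_p  = {g_p:8.1f}")
print(f"g_i**2 * Z_L / Z_in = {g_i**2 * Z_L / Z_in:8.1f}   (check with eq. 2.29)")
```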
An amplifier does not amplify power, but amplification is a controlled conversion of power from a
power supply into signal power.
𝑣S − 2𝑣L = 0 (2.34)
2.37. The maximum power delivered into a one-port occurs when half of the source
voltage lies across the one-port. What is the maximum power 𝑝Smax that can occur
across the source resistance, i.e., for any load, when compared to the maximum power
of the load one-port 𝑝Lmax ?
[Fig. 2.26. Circuit from Problem 2.35: a 30 V source and resistors of 18 kΩ, 21 kΩ, 18 kΩ, 15 kΩ, and 𝑅x .]
[Fig. 2.27. Circuit from Problem 2.36: a 24 V source and resistors of 18 kΩ, 9 kΩ, 18 kΩ, 18 kΩ, and 𝑅x .]
Problem
2.38. A voltage attenuation is given by −3 dB. What fraction is that?
A linear one-port element with negative impedance −𝑍 is placed into a two-port. This
two-port has a load impedance of 𝑍L . Its input signal comes from a linear network
that, according to Norton’s theorem, has been replaced by a real current source with
a source admittance 𝑌S . At this point, we switch from impedances to admittances and
arrive at the circuit of Figure 2.28 where the signal current source is shunted by the
source admittance 𝑌S , the admittance of the component with negative impedance −𝑌,
and the admittance of the load 𝑌L . The maximum power a source can deliver to a load
is obtained with admittance matching (Section 2.2.4.1). Thus, the maximum power is
𝑝L,max = 𝑖S² /(4 × 𝑌S ) (2.39)
This is the maximum available power from the generator. The actual power depends
on the current division due to the value of 𝑌L
𝑝L = 𝑖L² /𝑌L = 𝑖S² × 𝑌L /(𝑌S + 𝑌L )² (2.40)
By considering the shunting negative admittance, the current division changes affect-
ing the denominator of the above equation to
𝑝L = 𝑖L² /𝑌L = 𝑖S² × 𝑌L /(𝑌S + 𝑌L − 𝑌)² (2.41)
By dividing this power by the maximum power that is available from the source, we
arrive at a power gain 𝑔𝑝 of
𝑔𝑝 = 𝑝L /𝑝L,max = 4 × 𝑌S × 𝑌L /(𝑌S + 𝑌L − 𝑌)² (2.42)
If the sum of the positive admittances is made equal to the magnitude of the negative
admittance −𝑌 the denominator becomes zero and the power gain infinite. The degree
of agreement between the sum 𝑌S + 𝑌L and |𝑌| will determine the gain. For |𝑌| > 𝑌S +
𝑌L , the power gain would be negative indicating that the circuit is unstable under this
condition. At an operating point with smaller absolute admittance |𝑌| this instability
is circumvented. Due to the nonlinearity of the 𝑖-𝑣-characteristic, such an operating
point will always be available.
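Equation (2.42) is easily explored numerically. In the Python sketch below the source and load admittances are assumptions; it merely shows how the power gain grows as |𝑌| approaches 𝑌S + 𝑌L.

```python
# Sketch: power gain of the negative-admittance one-port amplifier of
# Fig. 2.28 according to eq. (2.42). Values are illustrative assumptions.

Y_S = 1.0e-3      # source admittance (S)
Y_L = 2.0e-3      # load admittance (S)

def power_gain(Y):
    """g_p = 4*Y_S*Y_L/(Y_S + Y_L - Y)**2 for an element of admittance magnitude Y."""
    return 4.0 * Y_S * Y_L / (Y_S + Y_L - Y) ** 2

for Y in (0.0, 1.0e-3, 2.0e-3, 2.9e-3):
    print(f"|Y| = {Y*1e3:3.1f} mS  ->  g_p = {power_gain(Y):8.1f}")
```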
Elements that can provide power gain are called active elements. As we have
shown, one-ports qualify if they contain negative impedance. Practical devices with
negative impedance are gas discharge devices (glow lamps) and semiconductor components
(e.g., the old-fashioned tunnel diode). Clearly, there is no source of electric
power in such one-ports. As signal amplification means conversion of some electric
power into signal power (Section 2.2.4), there is no need for such a power source inside the one-port.
Problem
2.39. In the circuit of Figure 2.28 the source impedance is 600 Ω and the load 𝑍L is a
resistor of 100 Ω.
(a) What is the minimum absolute value of the negative impedance of the active ele-
ment to ensure that the power gain 𝑔𝑝 > 1?
(b) Give the value of the voltage transfer 𝑔𝑣 under the above condition.
Active devices of all technologies (vacuum tubes, bipolar junction transistors, field-effect
transistors) are three-terminal elements. When used as amplifiers they, inevitably, become two-ports
with an input and an output port. To investigate how this is achieved we need to use
one of them as an example. At the risk of being called “old-fashioned,” we decided on
the bipolar junction transistor because its properties are far from ideal, so that all
four parameters must be used for its description, i.e. none of them may be disregarded.
There are six ways of arranging a three-terminal device in a two-port. Taking into
account that one of the three terminals is the “natural” input lead that must be con-
nected to the input there remain three qualified configurations to be considered. Each
of these configurations is named after that electrode of the three-terminal device that
is common to input and output, e.g., for a bipolar junction transistor
– common-emitter circuit,
– common-base circuit, and
– common-collector circuit.
Fig. 2.29. The T-model of a three-terminal device placed into a two-port (a) collated with a two-port
using h-parameters (b).
Actually, only the configuration with the highest power gain must be considered (e.g.,
the common-emitter circuit, Figure 2.31). The other two can be understood as feed-
back circuits, as will be discussed in detail later (Section 2.4.3.5). Nowadays, three-terminal
devices are only used in special applications (e.g., high-frequency oscillators,
Section 4.2.1.7) because using ready-made integrated amplifiers is in most cases more
convenient, cheaper, and gives more reliable results.
The appropriate model of a three-terminal device is the T-model. Figure 2.29a
shows how a small-signal T-model of a bipolar transistor with its four components
(parameters) is placed into a two-port. One input terminal is connected to one output
terminal, establishing a ground terminal. This fact gives rise to alternative names,
e.g., grounded-emitter circuit instead of common-emitter circuit. Figure 2.29b shows
a two-port with its four ℎ-parameters. The dependent source in the T-model is a
current-controlled current source. Consequently, the hybrid parameter presentation
of the two-port is chosen because the characteristic transfer parameter ℎf is the short-
circuit current gain. The current 𝑖b into the electrode B (which is the natural input)
controls the current source (𝛽×𝑖b ). It is at the same time the input current 𝑖i of the two-
port. The electrode common to output and input is E; hence it is called a common-E
circuit. The ℎ-parameters are obtained from the four T-model parameters as
ℎi = 𝑟b + 𝑟e × (1 + 𝛽)/(1 + 𝑟e /𝑟c ) (2.43)
ℎr = 𝑟e /(𝑟c + 𝑟e ) (2.44)
ℎf = (𝛽𝑟c − 𝑟e )/(𝑟c + 𝑟e ) (2.45)
ℎo = 1/(𝑟c + 𝑟e ). (2.46)
On the other hand, the four T-parameters can be expressed by the h-parameters as
follows:
𝑟e = ℎr /ℎo (2.47)
𝑟b = ℎi − ℎr × (ℎf + 1)/ℎo (2.48)
𝑟c = (1 − ℎr )/ℎo (2.49)
𝛽 = (ℎr + ℎf )/(1 − ℎr ) (2.50)
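Equations (2.43) to (2.50) are straightforward to evaluate; the Python sketch below converts an assumed set of T-parameters into ℎ-parameters and back as a round-trip check.

```python
# Sketch: T-model parameters of Fig. 2.29a -> h-parameters,
# eqs. (2.43)-(2.46), and back, eqs. (2.47)-(2.50).
# The T-parameter values are illustrative assumptions.

r_b, r_e, r_c, beta = 300.0, 25.0, 1.0e6, 200.0   # ohm, ohm, ohm, dimensionless

# T-model -> h-parameters
h_i = r_b + r_e * (1 + beta) / (1 + r_e / r_c)    # (2.43)
h_r = r_e / (r_c + r_e)                           # (2.44)
h_f = (beta * r_c - r_e) / (r_c + r_e)            # (2.45)
h_o = 1.0 / (r_c + r_e)                           # (2.46)

# h-parameters -> T-model (reproduces the values above)
r_e2  = h_r / h_o                                 # (2.47)
r_b2  = h_i - h_r * (h_f + 1) / h_o               # (2.48)
r_c2  = (1 - h_r) / h_o                           # (2.49)
beta2 = (h_r + h_f) / (1 - h_r)                   # (2.50)

print(f"h_i = {h_i:.1f} ohm, h_r = {h_r:.2e}, h_f = {h_f:.1f}, h_o = {h_o:.2e} S")
print(f"round trip: r_b = {r_b2:.1f}, r_e = {r_e2:.1f}, r_c = {r_c2:.0f}, beta = {beta2:.1f}")
```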
𝑣i = −𝑖i × 𝑍S . (2.55)
From (2.53) and (2.54) we get for the current gain 𝑔𝑖 of the two-port burdened with an
impedance 𝑍L
𝑔𝑖 = 𝑖o /𝑖i = −(𝑟e − 𝛽𝑟c )/(𝑍L + 𝑟e + 𝑟c ) = 𝛽/(1 + (𝑟e + 𝑍L )/𝑟c ) − 1/(1 + (𝑟c + 𝑍L )/𝑟e ). (2.56)
With 𝑟c ≫ 𝑟e one gets 𝑔𝑖 ≈ 𝛽 for very small 𝑍L (short-circuit), and 𝑔𝑖 = 0 for obvious
reasons, when the load is an open-circuit.
𝑍i = 𝑟b + 𝑟e × (𝛽 + 1)/(1 + 𝑟e /𝑟c ) (2.59)
and with 𝑟c ≫ 𝑟e it is 𝑍i ≈ 𝑟b + 𝑟e × (𝛽 + 1). Again the latter result is obvious
when scrutinizing Figure 2.29a. The input impedance is the base resistor 𝑟b in series
to the dynamic impedance of the emitter resistance 𝑟e × (𝑖o + 𝑖i )/𝑖i = 𝑟e × (𝑔𝑖 + 1) (As
mentioned before, impedances depending on gain are called dynamic impedances,
Section 2.4.3.2).
The output impedance 𝑍o is obtained from (2.52) and (2.53) as
𝑍o = 𝑟e (𝑟b + 𝑍S )/(𝑟e + (𝑟b + 𝑍S )) + 𝑟c × (𝑟e (1 + 𝛽) + (𝑟b + 𝑍S ))/(𝑟e + (𝑟b + 𝑍S )). (2.60)
An open-circuit input does not allow any input current to flow. Consequently, the out-
put impedance degenerates to a series connection of 𝑟c and 𝑟e . No dynamic action is
involved. A short-circuit input reduces the impedance to 𝑟c .
Due to its small value we disregard the reverse transfer, the reverse voltage gain
(ℎr ). Observe that the forward parameters (𝑍i and 𝑔𝑖 ) depend on the load impedance,
whereas, the backward parameters (𝑍o and 𝑔𝑣rev ) depend on the source impedance.
Two-ports are transparent in either direction – they do not isolate input from output.
Figure 2.30 shows the dependence of input impedance, current gain, voltage gain,
and power gain on the load impedance and the dependence of the output admittance
on the source admittance. For completeness, the corresponding dependences for the
common-collector and common-base configurations are included, as well, even if
these two configurations can be explained by feedback action as will be shown later
(Section 2.4.3.5).
Fig. 2.30a. Dependence of input impedance on the load impedance for the common emitter, base
and collector circuits. Fig. 2.30b. Dependence of the absolute values of current gain on the load
impedance for the common emitter, base, and collector circuits.
Fig. 2.30c. Dependence of the absolute values of voltage gain on the load impedance for the common
emitter, base, and collector circuits. Fig. 2.30d. Dependence of output admittance on the source
admittance for the common emitter, base, and collector circuits.
Some features of these figures deserve closer scrutiny. In the case of the common-
emitter circuit, the output admittance is practically independent of the source admit-
tance, and the input impedance is independent of the load impedance.
In the case of the common-collector circuit the output admittance is, in a central
region, proportional to the source admittance and the input impedance is, in a cen-
tral region, proportional to the load impedance. The voltage gain 𝑔𝑣c of the common-
collector circuit is for small load impedances 𝑔𝑣c = 𝑔𝑣e /(1 − 𝑔𝑣e ) ≈ −1 because of the
negative feedback action (Section 2.4.3.5) with a closed-loop gain of 𝑔𝑣e . The current
gain 𝑔𝑖c of the common-collector circuit is for geometric reasons for not too small load
admittances 𝑔𝑖c = −(𝑔𝑖e + 1).
Fig. 2.31. Dependence of power gain on the load impedance for the common emitter, base, and
collector circuits.
Applying duality we find that the current gain 𝑔𝑖b of the common-base circuit is
for small load impedances 𝑔𝑖b = 𝑔𝑖e /(1 − 𝑔𝑖e ) ≈ −1 because of the negative feedback
action with a closed-loop gain of 𝑔𝑖e . The voltage gain 𝑔𝑣b of the common-base circuit
is for geometric reasons for not too small load resistances 𝑔𝑣b = −(𝑔𝑣e +1). And finally,
the common-emitter circuit offers for all burdens the highest power gain (Figure 2.31).
This feature qualifies it to be the basic transistor circuit, with the other two just local
feedback variants. Consequently, we do not spend any special effort with the common-
base and common-collector circuits but refer to Section 2.4.3.5 and later.
Using any three-terminal component of the other technologies gives, qualitatively,
the same answer, i.e., the common-cathode and the common-source circuits are basic,
the other two in each family can be explained by feedback action.
A two-port that does not contain an active element is a passive two-port. Passive two-
ports cannot provide power gain. In Section 2.1.3, we arranged resistors in series (and
in parallel) to discuss voltage (current) division. Such dividers as shown in Figure 2.32
are the simplest passive two-ports with more than one component.
Whereas active two-ports, in general, are unidirectional, i.e. it is self-evident
which port is the input port, passive two-ports, usually, are bidirectional. In addition,
the feed-through from the output to the input can be very severe. Therefore, it is wise
to investigate the property of a passive two-port with actual burden, i.e. to include the
load into all considerations.
Fig. 2.32. (a) Voltage divider and (b) current divider presented as two-ports.
[Fig. 2.33. (a) Symmetric T-section with series impedances 𝑍1 /2 and shunt impedance 𝑍2 ; (b) the
dual section with shunt admittances 𝑌1 /2 and series admittance 𝑌2 .]
A voltage divider inside a two-port is called an attenuator. Returning to our statement
that electric power is the ultimate electrical variable of interest it should not surprise
that attenuators, usually, are made to divide (attenuate) electric power, which is, of
course, accompanied by voltage (and current) division.
2.2.7.1 Attenuators
If the transfer through a two-port has a power gain of less than 1, this is called atten-
uation or loss. Obviously, attenuation can be achieved with passive devices. In Sec-
tion 2.1.3.4, we have arranged two dissipative one-ports in series to form a voltage
divider network. Attenuators are needed when the source signal is too large to be prop-
erly handled by the consecutive device. Presently we place the two impedances into
a two-port as shown in Figure 2.32a which is also called L-section of an attenuator.
Without load its voltage attenuation equals that of the T-section (with 𝑍1 /2 = 𝑅1 and
𝑍2 = 𝑅2 ) shown in Figure 2.33a in which another impedance is added to make the
two-port symmetric.
The (power) attenuation or loss is obtained from (2.21a) and (2.21b) using the 𝑧-
parameters of the T-section (which are symmetric)
𝑧i = 𝑧o = 𝑍1 /2 + 𝑍2 , and 𝑧r = 𝑧f = 𝑍2
by
𝑔𝑝 = 𝑍2² /(𝑍2 + 𝑍1 /2 + 𝑍ch ) × 𝑍ch /((𝑍1 /2 + 𝑍2 ) × (𝑍ch + 𝑍2 + 𝑍1 /2) − 𝑍2² ). (2.63)
Problems
2.40. A T-section of an attenuation ladder has 𝑍1 = 80 Ω and 𝑍2 = 60 Ω.
(a) Find the characteristic impedance.
(b) Give the voltage attenuation 𝑣o /𝑣i per section in dB if the load equals the charac-
teristic impedance.
To achieve both a given characteristic impedance and at the same time a specific at-
tenuation per section (expressed by 𝑘 = √1/𝑔𝑝 , the square root of the inverse power
gain), the following relations can be used:
𝑍1 /2 = 𝑍ch × (𝑘 − 1)/(𝑘 + 1) (2.64)
and
𝑍2 = 𝑍ch × 2𝑘/(𝑘² − 1). (2.65)
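The design direction is easily automated. The Python sketch below computes 𝑍1 /2 and 𝑍2 from an assumed characteristic impedance and attenuation per section using (2.64) and (2.65), and checks the result against the standard T-section relation 𝑍ch = √(𝑍1²/4 + 𝑍1 𝑍2 ).

```python
# Sketch: designing a symmetric T-section (Fig. 2.33a) for a given
# characteristic impedance and attenuation per section, eqs. (2.64), (2.65).
# Z_ch and the attenuation are assumed values; k = sqrt(1/g_p), so the
# power attenuation in dB equals 20*log10(k).
import math

Z_ch   = 50.0               # desired characteristic impedance (ohm)
att_dB = 6.0                # desired attenuation per section (dB)

k = 10 ** (att_dB / 20.0)   # k = sqrt(1/g_p)

Z1_half = Z_ch * (k - 1) / (k + 1)        # eq. (2.64)
Z2      = Z_ch * 2 * k / (k**2 - 1)       # eq. (2.65)

# consistency check: Z_ch = sqrt((Z1/2)**2 + Z1*Z2) for a T-section
Z_ch_check = math.sqrt(Z1_half**2 + 2 * Z1_half * Z2)

print(f"Z1/2 = {Z1_half:6.2f} ohm, Z2 = {Z2:6.2f} ohm, check: {Z_ch_check:.2f} ohm")
```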
Fig. 2.34. Cascaded T-sections: the attenuation is additive if given in dB; the input impedance
equals 𝑍ch when terminated with a load resistance 𝑅L = 𝑍ch .
Fig. 2.35. Signal modification by diode: (a) series configuration with 𝑖L = 𝑖D (b) parallel configura-
tion with 𝑣L = 𝑣D .
If diodes must be used in parallel, then the characteristic must be changed in a way
that the forward characteristics do not differ much. This can best be done by having
for each of them a (high enough) resistor in series.
Although any parameter of a two-port can be nonlinear, nonlinearity is usually
allotted to the transfer parameter. It is a good exercise to visualize various output sig-
nals after being nonlinearly modified in (simple) nonlinear two-ports. Observe that
there are usually twin configurations, e.g. Figure 2.35, with the nonlinear element in
series or in parallel to the output (load).
Figure 2.36a shows a resistor 𝑅 in series with the series combination of a diode and an
(ideal) voltage source 𝑉S , and a load resistor 𝑅L at the output. Even if this circuit is
rather simple, it is good practice to simplify it. As viewed from the diode, 𝑅 and 𝑅L are
in parallel. Therefore, the circuit can be simplified as shown in Figure 2.36b. The new
input voltage is 𝑣iTh = 𝑣i × 𝑅L /(𝑅 + 𝑅L ), and the new series resistor is 𝑅Th = 𝑅 × 𝑅L /(𝑅 + 𝑅L ).
It is important that 𝑣o , the quantity of interest, remains accessible and does not get
veiled. When the (biasing) voltage 𝑉S of the constant voltage source is zero and the
input voltage 𝑣i is such that the diode is forward biased (i.e. positive), the input current will
be short-circuited by the diode and the output voltage 𝑣o will be zero (or the value of
𝑉S , respectively).
If 𝑣i is reversed, the diode is an open-circuit (it may be left out) and the output
voltage is 𝑣o = 𝑣iTh = 𝑣i × 𝑅L /(𝑅 + 𝑅L) (voltage division, Section 2.1.3.4). Reversing the
polarity of diode and voltage source gives the analogous behavior for input voltages
of the opposite polarity. Figure 2.36c and 2.36d show the transfer characteristics for
both polarities.
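For an ideal diode with a fixed forward voltage the transfer characteristic of Figure 2.36 reduces to a simple clamp. The Python sketch below evaluates it for the simplified circuit of Figure 2.36b; the resistor values are assumptions, the 0.7 V forward voltage is the one assumed in Figure 2.36c, and the diode's own dynamic resistance is neglected.

```python
# Sketch: transfer characteristic of the clipping circuit of Fig. 2.36
# (diode parallel to the output), using the simplified circuit of
# Fig. 2.36b and a fixed diode forward voltage.

R, R_L = 1e3, 1e3          # series and load resistor (ohm), assumed
V_S    = 1.0               # biasing voltage of the clamp (V)
V_F    = 0.7               # assumed diode forward voltage (V)

def v_o(v_i):
    """Output voltage of the clipper for an input voltage v_i."""
    v_iTh = v_i * R_L / (R + R_L)      # Thevenin-reduced input
    return min(v_iTh, V_S + V_F)       # diode clamps the output at 1.7 V

for v in (-4.0, 0.0, 2.0, 4.0, 6.0):
    print(f"v_i = {v:5.1f} V  ->  v_o = {v_o(v):5.2f} V")
```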
Problems
2.43. Reverse in Figure 2.36a only the polarity of the constant biasing voltage source.
How does the transfer characteristic look?
2.44. Construct the transfer characteristic, if two anti-parallel diodes, each connected
to a suitable constant voltage source, are placed inside the two-port.
Fig. 2.36a. Diode clipping circuit with the diode parallel to the output. Fig. 2.36b. Simplified circuit
diagram by combining the parallel resistors.
Fig. 2.36c. Transfer characteristic under the assumption of a diode forward voltage of 0.7 V (the
output clips at 1.7 V). Fig. 2.36d. As (c) after reversing the polarity at the diode and the constant
voltage source (the output clips at −1.7 V).
Voltage limiting or clipping circuits can also be designed by having the clipping diode
in series to the load resistor. In this case, the constant voltage source, biasing the
diode, is a real source so that its impedance can be combined with the load resistor re-
ducing the number of elements to be considered. Figure 2.37b shows the simplified cir-
cuit diagram and Figure 2.37c the voltage transfer characteristic. Reversing the diode,
the constant (biasing) voltage, and the polarity of the input signal would result in the
transfer characteristic shown in Figure 2.37d.
Problems
2.45. Serial clipping diode.
(a) Is the diode in Figure 2.37a forward biased if the input signal voltage is larger than
the effective bias voltage?
(b) What is the minimum output voltage?
Fig. 2.37a. Diode clipping circuit with the diode in series to the output. Fig. 2.37b. Simplified diode
clipping circuit using a series diode.
Fig. 2.37c. Transfer characteristic of the circuit shown in (a). Fig. 2.37d. As (c) after reversing the
polarity at the diode and of the constant voltage source.
Another way of clipping signals (limiting voltages) with passive components is to use
Zener diodes. In Figure 2.38a, a circuit using one Zener diode is shown; in Figure 2.38b
two Zener diodes are arranged in series, with one of them reverse biased limiting both
voltage polarities symmetrically.
Fig. 2.38. Voltage limiter circuits. (a) Using one Zener diode and (b) having two oppositely biased
Zener diodes in series.
Fig. 2.39a. Diode discriminator with parallel diode (simplified circuit). Fig. 2.39b. Output signal if
the signal is larger than the bias voltage.
Problem
2.47. Construct the voltage transfer functions for the two cases shown in Figure 2.38
assuming a Zener voltage of 6.0 V each.
Reversing the diode in Figure 2.36 results in a parallel-diode discriminator. Figure 2.39a
shows the simplified circuit with the parallel resistors 𝑅L and 𝑅S combined to 𝑅Th =
𝑅L × 𝑅S /(𝑅L + 𝑅S) and the signal amplitude 𝑣S reduced to its effective value 𝑣STh =
𝑣S ×𝑅L/(𝑅L +𝑅S). Without signal, the output voltage 𝑣o equals the bias voltage 𝑉B . Only
when the effective signal amplitude 𝑣STh is larger than 𝑉B , the diode is reverse biased
and the portion of 𝑣STh that is above 𝑉B gets transmitted. This is shown in Figure 2.39b.
Inverting the diode in Figure 2.37a yields the serial-diode discriminator. In this
case, the effective bias voltage is 𝑉BTh = 𝑉B ×𝑅L /(𝑅L +𝑅S ) and 𝑅Th = 𝑅L ×𝑅S /(𝑅L +𝑅S ).
Without signal the output voltage 𝑣o equals the effective bias voltage 𝑉BTh . Only when
the signal amplitude 𝑣S is larger than 𝑉BTh the diode is forward biased and transmits
the portion of 𝑣S that is above 𝑉BTh . This is shown in Figure 2.40b.
Fig. 2.40a. Diode discriminator with serial diode (simplified circuit). Fig. 2.40b. Output signal if the
signal is larger than the effective bias voltage.
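With an ideal diode the parallel-diode discriminator simply transmits whatever part of the effective signal exceeds the bias, the output never dropping below 𝑉B. The Python sketch below generates the output of Figure 2.39b for a sinusoidal input; amplitude, bias, and resistor values are assumptions.

```python
# Sketch: parallel-diode discriminator of Fig. 2.39a with an ideal diode.
# Without signal the output sits at V_B; only the part of the effective
# signal v_STh above V_B is transmitted (Fig. 2.39b).
import math

R_S, R_L = 1e3, 1e3       # source and load resistor (ohm), assumed
V_B      = 2.0            # bias voltage (V), assumed
v_S_amp  = 5.0            # amplitude of the sinusoidal input (V), assumed

def v_o(t):
    v_STh = v_S_amp * math.sin(2 * math.pi * t) * R_L / (R_L + R_S)
    return max(V_B, v_STh)     # ideal diode: output never drops below V_B

for t in (0.0, 0.125, 0.25, 0.375, 0.5, 0.75):
    print(f"t = {t:5.3f}  ->  v_o = {v_o(t):5.2f} V")
```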
Even if the scope of this book is not the construction of electronic circuits, their design
requires knowledge of the operational limits of electronic components. The value
pair of voltage and current applied to an electronic component at any instant is called
the (instantaneous) operating point of that device. The operating point without a signal is
called the quiescent point or just the operating point. It is established by biasing that device. Linear
passive devices (resistors, capacitors, inductors, and transformers) do not need a bias;
they are operational without one. The properties of nonlinear components change
with changing operating point. In these cases, it is essential to bias these components
correctly.
The instantaneous operating point can be split into two components, the quiescent value (set by
biasing) and the superimposed value from the signal.
There is no general advice of how to best set the operating point. The following con-
siderations will enter into the decision:
– idle (quiescent) power of the component
– choice of the optimum dynamic range
– optimization of parameters (e.g., higher currents will usually allow a faster re-
sponse; also the small-signal gain will depend on the operating point)
– stability of operation
– avoiding or taking advantage of nonlinearities.
The characteristic of an ideal resistor obeying Ohm's law would be a straight line
from minus infinity to plus infinity. Obviously no such device can exist. Solid-state
properties and gas discharge properties will limit the maximum possible voltage applied
to a component because of voltage breakdown. In addition, there exists a maximum
temperature up to which a device will function. Only in rare cases will this be the
melting temperature; more often it is the ignition temperature, the recrystallization
temperature, etc. As it would be inconvenient to have a temperature measuring device
attached to each component, not only the maximum temperature is given as limit, but
also the electric power that would bring about that temperature when dissipated in the
device which is located in free air of standard atmospheric pressure at 25 ∘C. The bal-
ance between the quantity of heat produced by the dissipated power and the heat loss
to the surroundings fixes the device’s temperature within the thermal time constant
which is of the order of 10 ms. However, as no material is ideally uniform, the device
may be overheated locally so that current limits are given, too.
[Fig. 2.41. Network of Example 2.6: a voltage source 𝑣S feeding two voltage dividers, 9 kΩ/0.3 W over
18 kΩ/1 W and 1 kΩ/1 W over 2 kΩ/0.2 W.]
The operational limit of electronic devices is primarily the maximum allowed temperature.
Example 2.6 (Maximum power of resistors in a network). Figure 2.41 shows a network
consisting of one voltage source and four resistors. For the resistors not only the im-
pedance values but also the power ratings are given. The question to answer is what
the maximum voltage 𝑣Smax is so that none of the resistors gets a power overload.
As a first step an inspection of the circuit should reveal whether there is the possi-
bility to simplify it. As the question concerns all the resistors, it is not allowed to com-
bine resistors to make the circuit simpler. However, there is one simplification that can
be applied. Both voltage dividers divide in the ratio 1 to 2 so that, ideally, no current
will be flowing through the connection in the middle. It is superfluous and may be
left out. In general, connections that do not carry any current may be removed from
the circuit. The same would be true for components with only one terminal connected
unless they function as antenna. In the present case, it is useful to translate the max-
imum allowed power into maximum allowed current. Using 𝑝 = 𝑖2 × 𝑅 we get the
following ratings:
9 kΩ /0.3 W: 5.8 mA
18 kΩ /1.0 W: 7.5 mA
1 kΩ /1.0 W: 31.6 mA
2 kΩ /0.2 W: 10.0 mA
Considering that nine times more current flows through the right branch it is clear
that the resistor with 2 kΩ/0.2 W is the component that limits the maximum allowed
voltage to 𝑣Smax = 30 V.
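The rating check of Example 2.6 is readily reproduced numerically; the Python sketch below converts the power ratings of Figure 2.41 into maximum allowed currents and derives the limiting source voltage.

```python
# Sketch: Example 2.6 redone numerically. Each power rating is translated
# into a maximum allowed current via i_max = sqrt(p_max/R).
import math

ratings = {            # resistance (ohm): power rating (W), from Fig. 2.41
    9_000: 0.3,
    18_000: 1.0,
    1_000: 1.0,
    2_000: 0.2,
}

for R, p_max in ratings.items():
    i_max = math.sqrt(p_max / R)
    print(f"{R/1000:4.1f} kOhm / {p_max:3.1f} W  ->  i_max = {i_max*1e3:5.1f} mA")

# The right branch (1 kOhm + 2 kOhm = 3 kOhm) carries nine times the current
# of the left branch (27 kOhm); its 2 kOhm/0.2 W resistor (10 mA) limits the
# source voltage to 10 mA * 3 kOhm = 30 V.
print("v_S,max =", math.sqrt(0.2 / 2000) * 3000, "V")
```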
Problem
2.48. The load of a real voltage source (𝑉S = 110 V, 𝑅S = 1 kΩ/0.2 W) consists of a
potentiometer (20 kΩ/1 W) with its sliding contact loaded by a resistor of 20 kΩ/1 W.
(a) At which position of the sliding contact is the (current) load of the potentiometer
the largest?
(b) Which of the three resistive components is most likely to be overloaded, i.e. to be
thermally destroyed?
As detailed in Section 2.2.2, four parameters are needed to fully describe a linear two-
port. For real, i.e. nonlinear two-ports such parameters would be the result of lin-
earization at a specific operating point. To fully describe all properties of a nonlinear
two-port the dependence of each parameter on the input and output variables must
be given. In Figure 2.42 the dependences of the four hybrid parameters of a bipolar
junction transistor in the common-emitter configuration are given as an example.
For convenience the four dependences are shown in a combined figure of four
quadrants, one for each two-port property. On the right-hand side the reverse proper-
ties are shown, on the left-hand side the forward properties. As the hybrid parameters
are defined for a fixed output voltage and a fixed input current the appropriate one of
these variables is used as parameter.
The upper right quadrant is the output quadrant showing the dependence of the
output admittance ℎo = 𝑖o /𝑣o at selected input currents 𝑖i on 𝑣o and 𝑖o . The upper
left quadrant is the forward transfer quadrant. There the dependence of the short-
circuit current gain ℎf on 𝑖i and 𝑖o is given for one fixed output voltage 𝑣o . The lower
left quadrant is the input quadrant. There the dependence of the input impedance ℎi
on 𝑣i and 𝑖i at one fixed output voltage 𝑣o is displayed. The lower right quadrant shows
the dependence of the reverse transfer parameter ℎr at fixed input currents 𝑖i on 𝑣o and
𝑣i .
The only characteristic that is linear over most of its range is that of the (forward)
current gain. The input characteristic resembles that of the forward characteristic of
a junction diode. The reverse voltage gain in the lower right quadrant is so small that
we do not deal with it.
The most important quadrant is the output quadrant. For that reason it is often the
only quadrant shown, e.g., in data sheets. There are three regions to be considered.
The largest is the linear region where the individual characteristics are more or less flat
indicating a small output admittance, i.e. a high output impedance. Such a behavior
is expected for a current amplifier as which a bipolar junction transistor is usually
regarded. For small output voltages 𝑣o the characteristics become steep indicating a
low output impedance. This region is called region of saturation and is not used for
linear amplification. The region below the characteristic with 𝑖i = 0, i.e. when a very
small current flows out of the two-port, is called the cut-off region.
Fig. 2.42. Characteristics of a two-port based on hybrid parameters. Upper right quadrant: output
impedance, upper left quadrant: (forward) current gain (single characteristic only), lower left quadrant:
input impedance (single characteristic only), lower right quadrant: reverse voltage gain. [Plot
parameters: 𝑣o = 4.4 V; 𝑖i = 0 to 120 μA.]
For completeness the correct definitions of these regions which are specific to
bipolar junction transistors are given in Table 2.3. To understand this table, we must
know that in a transistor there are two junctions, the base–emitter (B-E) and the base–
collector (B-C) junction that behave just like junction diodes, i.e. they may be forward
or reverse biased.
The last line of Table 2.3 lists a fourth region which cannot be seen in the output
quadrant. Because of the symmetry between collector and emitter (as far as the junc-
tions are concerned) a transistor may be used in reverse, i.e. with the collector terminal
used as if it were the emitter terminal and vice versa. Thus, an additional linear region
is obtained whereas the other two remain the same (as is obvious from the Table).
The importance of the cut-off and the saturation regions lies in the low operating
power for operating points in those two regions. In the saturation region the voltage
is low, in the cut-off region the current. Therefore, bipolar junction transistors qualify
as switches, e.g., for applications in binary circuits (Chapter 5).
The term truth table is borrowed from the field of binary logic. With it all output situations for all
combinations of input variables are given.
Truth tables with the electronic states L and H give a unique description of binary cir-
cuits. Chapter 5 deals in more detail with truth tables.
Fig. 2.43. Connecting a load resistor 𝑅L to an ideal voltage
source by means of a simple ON/OFF switch.
Table 2.4. Truth table of the electrical variables across the ON/OFF switch of Figure 2.43.
Switch position         Voltage   Current
OFF (open-circuit)      H         L
ON (short-circuit)      L         H
Table 2.4 shows that an ON/OFF switch is self-dual. If it is OFF (open-circuit), the
conductance is zero, the voltage H and the current L. If it is ON (short-circuit), the
resistance is zero, the current H and the voltage is L.
Mechanical switches have metallic contacts. Due to their elasticity they undergo
mechanical oscillations with a frequency on the order of 10−1 kHz when switched,
resulting in an intermittent contact. This behavior is called bouncing. This contact
bounce (also called chatter) is common to mechanical switches and relays. When
the contacts strike together, their momentum and elasticity act together to cause a
mechanical oscillation. Instead of a clean transition from zero to full conductance
multiple sequential contacts result. Using mercury-wetted contacts eliminates con-
tact bounce. In binary electronics (Chapter 5) bouncing generates spurious signals
which must be suppressed by appropriate circuits (e.g., monostable multivibrators,
Section 4.1.2.2).
Regular switches are actuators and not pure electronic devices. However, a relay
is an electrically operated switch.
Disregarding semiconductor power switches, we concentrate on electronic
switches in regular circuits that can be realized by three-terminal components. These
components can be so biased that their impedance is either H or L. For bipolar junction
transistors these regions of operation are called cut-off and saturation, respectively
(Table 2.3). For FETs they are called cut-off and linear. In demanding applications
MOSFETs are the best choice for an electronic switch because
– no current flows into the gate (very high input impedance),
– its ON resistance 𝑟ds (ON) is typically less than 1 Ω, and
– the cut-off resistance is that of a reverse biased PIN diode (i.e., very high).
Changing the gate voltage of a FET will alter the channel resistance 𝑟ds (ON). In this
mode the FET operates as a variable (a voltage-controlled) resistor. Then the FET op-
erates in the linear mode or ohmic mode, i.e. the drain current is proportional to the
drain voltage 𝑣ds . Note that negligible drain current (𝑖d ≈ 0) is not only achieved in
the cut-off regime, but it can also be achieved in the ohmic regime with 𝑣ds = 0.
Problems
2.49. Why is a switch self-dual?
2.50. Which family of three-terminal elements has the best switching properties?
Figure 2.44b shows the load line in the field of the output characteristics of an ampli-
fier. In a class A amplifier the operating point is situated in the middle of this line. Sig-
nals as large as half the output range can be accommodated in either direction. How-
ever, the operating point (𝑉1/2 , 𝐼1/2 ) is at the position of the highest power consump-
tion (Section 2.2.4.1), i.e. without a signal the maximum power is consumed.
Fig. 2.44. Positions of the operating point of class A, B (and B’), and C (and C’) on a voltage transfer
characteristic (a), and on the load line in the output field (b).
Moving
the operating point either way makes the quiescent power 𝑃qui smaller
𝑃qui = 𝐼op × 𝑉op ≤ 𝐼1/2 × 𝑉1/2 (2.66)
If signals of only one polarity must be amplified, a position at either end of the out-
put range is better. The quiescent power becomes practically zero and the maximum
(unipolar) signal is practically twice as big. This would be a class B amplifier. Comple-
mentary semiconductor electronics allows the combination of two complementary B
amplifiers to amplify bipolar signals. The quiescent power is close to zero, the signal
power can be maximal.
If the operating point is outside the dynamic range of the amplifier, this is called
a class C amplifier. Then the lower part of an input signal is used up to put the mo-
mentary operating point into the dynamic range so that only that part (if any) of the
signal above a certain threshold (as determined by the position of the quiescent op-
erating point) is amplified. Such amplifiers are used as threshold amplifiers or biased
amplifiers.
[Figure: voltage transfer characteristic of a biased (class C) amplifier with threshold voltage 𝑣ithr ,
operating point (𝑣iop , 𝑣oop ), and maximum output 𝑣omax .]
Fig. 2.46a. Basic long-tailed pair circuit with two bipolar NPN transistors. Fig. 2.46b. Long-tailed pair
circuit used as a biased amplifier with current source replacing the high impedance resistor 𝑅E .
This circuit constitutes the basic version of the fully differential amplifier (Sec-
tion 2.3.5.1). As discussed there, a differential amplifier is used for amplifying any dif-
ference between the signals applied to its inputs. If the pair of transistors are identical
and balanced, a common-mode signal at the inputs will not cause a significant signal
at the output. Thus, the output of this circuit (which is taken across the collectors) for
a common-mode signal would be zero.
A difference between the base signals will be amplified by the transistors, and
will result in an output signal proportional to the difference between the two signals.
The differential voltage gain of this circuit is high, whereas its common-mode gain
is low. Thus this circuit is the standard input circuitry of operational amplifiers (Sec-
tion 2.3.5.2).
Fig. 2.47a. Family of transfer functions; amplifier, comparator, comparator with stable positive feedback,
comparator with unstable positive feedback (Schmitt trigger). Fig. 2.47b. A bare operational
amplifier acting as comparator.
Let us switch to Figure 2.46b where the circuit is improved by substituting the high
impedance resistor by a current source. Such a current source can be realized by an
active current source circuit (Sections 2.5.6.1 and 2.5.6.2). Besides, just one of the input
terminals is used; the other one is grounded (signal-wise). Connecting a pair of FETs
at the sources would be at least as good.
The difference between a fixed (threshold) voltage 𝑣thr and the input signal 𝑣i is
amplified but only if both transistors have at least a class B operating point. If 𝑣i is too
small, Q1 does not conduct and all the current 𝑖S of the constant current source flows
through Q2 resulting in a constant output voltage of 𝑉+ − 𝑅2 × 𝑖S . As soon as Q1 opens
the current 𝑖1 flowing through Q1 will show up as an increase 𝑣o of the output signal
𝑣o = 𝑖1 × 𝑅2 until 𝑖1 = 𝑖S at which point all the current will flow through Q1 and the
output voltage 𝑣o will coincide with 𝑉+ . This asymmetric version of a long-tailed pair
is a cascade of a common-collector circuit (Q1 ) and common-base stage (Q2 ).
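An idealized numerical picture of this threshold behavior is given by the Python sketch below: below the threshold all of the tail current flows through Q2, above it the current is steered to Q1 until Q2 is cut off. The supply, resistor, and current values, and the width of the (small) linear steering range, are assumptions for illustration only.

```python
# Sketch of the idealized transfer behavior of the biased (threshold)
# amplifier of Fig. 2.46b. All numerical values are assumptions.

V_plus = 10.0   # positive supply (V)
R_2    = 2e3    # collector resistor of Q2 (ohm)
i_S    = 2e-3   # tail current of the constant current source (A)
v_thr  = 1.0    # threshold voltage at the base of Q2 (V)
dv_lin = 0.1    # assumed input range over which the current is steered (V)

def v_out(v_i):
    """Output voltage at the collector of Q2 as a function of v_i."""
    x = min(max((v_i - v_thr) / dv_lin, 0.0), 1.0)  # fraction steered into Q1
    i_1 = x * i_S                       # current taken over by Q1
    return V_plus - R_2 * (i_S - i_1)   # output rises by i_1*R_2 above the floor

for v in (0.5, 1.0, 1.05, 1.1, 1.5):
    print(f"v_i = {v:4.2f} V  ->  v_o = {v_out(v):5.2f} V")
```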
2.3.4.2 Comparators
Increasing the gain of a threshold amplifier so much that the linear input range is less
than about 10⁻³ V has the consequence that the operating point at the output is either
in the lower L or in the higher H saturation region and hardly in the linear region
at all. By increasing the gain, e.g., by positive feedback (Section 2.4.2), this gap with
linear operation can be made very small. As a consequence small variation in the input
signal (or the reference voltage) will be sufficient to make the comparator switch from
L to H or vice versa. By increasing the positive feedback to a closed-loop gain ≥ 1
this gap is closed producing a hysteresis in the transfer function (Section 4.1.2). This is
illustrated in Figure 2.47a. In Figure 2.47b such a comparator is symbolized by a bare
operational amplifier.
Any standard operational amplifier with a well-balanced difference input and a
very high gain can act in open-loop configuration as comparator and can, therefore,
be used in applications with moderate requirements. When the noninverting input is
70 | 2 Static linear networks
at a higher voltage than the inverting input, the high gain of the op-amp causes the
output to saturate at the highest positive voltage (𝑣H ) it can supply. When the voltage
at the noninverting input is below that of the inverting input, the output saturates at
the most negative voltage (𝑣L ) it can supply.
Actually, most devices sold as comparators are Schmitt triggers (Section 4.1.2.1).
As shown in Figure 2.47a their transfer function has a hysteresis, a result of unstable
positive feedback, i.e. there does not exist a region with linear amplification. Thus, it
is ensured that the output is either L or H.
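The behavior of such a comparator with hysteresis can be mimicked by a few lines of Python; the output levels and the thresholds (𝑣tl and 𝑣tu in Figure 2.47a) are assumed values.

```python
# Sketch: comparator with hysteresis (Schmitt trigger). The output is
# either L or H, and the switching threshold depends on the present
# output state. Levels and thresholds are illustrative assumptions.

V_L, V_H   = 0.0, 5.0     # output levels L and H
v_tl, v_tu = 1.0, 2.0     # lower and upper threshold of the hysteresis

def schmitt(v_in, state_high):
    """Return (output voltage, new state) for one input sample."""
    if state_high and v_in < v_tl:        # only a drop below v_tl switches H -> L
        state_high = False
    elif not state_high and v_in > v_tu:  # only a rise above v_tu switches L -> H
        state_high = True
    return (V_H if state_high else V_L), state_high

state = False
for v in (0.5, 1.6, 2.1, 1.8, 1.2, 0.9, 1.5, 2.3):   # a noisy input sequence
    v_o, state = schmitt(v, state)
    print(f"v_i = {v:3.1f} V  ->  v_o = {v_o:3.1f} V")
```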
An amplifier may have (at least) two terminals for supplying the power, two terminals
as input port, and two as output port. Thus, starting out with six terminals we arrive
at the following family of amplifier configurations:
– fully differential amplifiers (push-pull differential amplifiers, with six leads),
– differential amplifiers (with five leads),
– push-pull amplifiers (with five leads),
– simple (four-terminal) amplifier, and
– active three-terminal devices.
All amplifiers need (at least) two terminals supplying the power for the operation.
However, the input and output ports which are independent in fully differential am-
plifiers may degenerate by grounding one terminal of either the output or the input
or both. The resulting family of amplifiers is displayed in Figure 2.48. By using the re-
maining output terminal for the supply of the operating power the simple amplifier
degenerates further to a three-terminal amplifier.
A standard operational (or operative) amplifier is a differential amplifier with volt-
age gain as characteristic transfer parameter. For our discussion we will degenerate
said standard operational amplifier by grounding the inverting input terminal arriving
at a simple voltage amplifier. This will be our standard building block for an amplifier
as symbolized in Figure 2.49.
Fig. 2.48. The five steps in the degeneration of a fully differential amplifier to an active three-
terminal device.
[Fig. 2.49. Symbol of the standard voltage amplifier building block with input 𝑣i , output 𝑣o , and
voltage gain 𝑔𝑣 .]
If the voltage gain of the two amplifiers is equal, this arrangement delivers an
output signal that is proportional to the difference of the input signals. Such an ar-
rangement has two drawbacks:
– it suffers from a small dynamic range because the power supply voltage 𝑉+ must
be larger than the product of each input voltage and the voltage gain, i.e. 𝑉+ >
𝑣i𝑥 × 𝑔𝑣𝑥 , and
– the common mode input signal 𝑣iCM is amplified in the same way as the differential
signal resulting in a common mode output signal 𝑣oCM = 𝑔𝑣 × 𝑣iCM . (The common
mode voltage is the mean of the two voltages)
Fig. 2.50. Difference amplifier made of two standard voltage amplifiers.
[Figure: two voltage amplifiers 𝑔𝑣1 and 𝑔𝑣2 with inputs 𝑣i1 and 𝑣i2 , a common output 𝑣o , supply 𝑉+ ,
and a common floating potential 𝑣float .]
half to each (assuming components with identical electrical properties). The decisive
point is the following: Changes in the two amplifiers are only possible if their individ-
ual operating current changes by the same amount but in the opposite direction. This
only happens when (small) voltage signals of opposite sign are applied to the inputs
of the individual amplifiers, as is the case. Because of the floating voltage 𝑣float , the in-
put voltages 𝑣i𝑥 are reduced to effective input voltages 𝑣i𝑥eff = 𝑣i𝑥 − 𝑣float . The floating
voltage 𝑣float equals the common mode input voltage 𝑣iCM , making |𝑣i1eff | = |𝑣i2eff | and
of opposite sign.
𝑣iCM = 𝑣float = (𝑣i1 + 𝑣i2 )/2 (2.68)
Substituting 𝑣i𝑥 by (𝑣i𝑥 − 𝑣float ) in the above exercise, one gets 𝑣o = (𝑣i1 − 𝑣i2 ) × 𝑔𝑣 as
before. However, contrary to before, the common mode output voltage 𝑣oCM now equals
the common mode input voltage 𝑣iCM , so that the dynamic range is greatly increased
because the common mode input voltage does not get amplified. The common mode
gain is 1!
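In this ideal picture the circuit amplifies the difference of the inputs by 𝑔𝑣 and passes their mean with a gain of 1, as the following Python sketch (with an assumed 𝑔𝑣 and perfectly matched halves) illustrates.

```python
# Sketch of the ideal fully differential amplifier discussed above:
# differential gain g_v, common-mode gain 1 (ideal tail current source,
# perfectly matched halves). g_v and the input values are assumptions.

g_v = 100.0   # differential voltage gain (assumed)

def output(v_i1, v_i2):
    v_diff = v_i1 - v_i2            # differential input
    v_cm   = 0.5 * (v_i1 + v_i2)    # common-mode input, eq. (2.68)
    return g_v * v_diff + 1.0 * v_cm

# a 10 mV difference riding on a 2 V common-mode level
print(output(2.005, 1.995))         # -> 100*0.01 + 2.0 = 3.0 V
```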
For this ideal model the common-mode-rejection ratio (CMRR), i.e. the ratio of the
signal gain 𝑔𝑣 to the common mode gain, 𝑔𝑣CM equals 𝑔𝑣 . Such a circuit, when built,
has two weak spots:
– The current delivered by the current source is somewhat dependent on the com-
mon mode input voltage.
– The two voltage amplifiers would not have identical properties, making the com-
mon mode voltage at the output different from that at the input.
In addition, the parameters are subject to drift. Any changes in the parameters of the
two amplifiers would have no effect, as long as they are equal. Changes due to tem-
perature changes can be minimized by thermally coupling the two input elements,
so that a temperature change is common to both. Thus, thermal drift is, to a good
part, suppressed by common-mode rejection. Drift due to aging (microscopic struc-
tural changes in semiconductors) can be an issue, too. Other reasons for a change in
2.3 Real active two-ports (amplifiers) | 73
the (common-mode) output voltage are changes in the supply voltage, in the burden
and electronic noise.
The most important application of a fully differential amplifier is as input stage of
operational amplifiers.
Problem
2.52. A fully differential amplifier has a common-mode gain of 𝑔𝑣CM = 1 and a signal
gain of 𝑔𝑣 . How much is the dynamic range larger than it would be if the CMRR = 1?
Practical devices, which are moderately complex circuits, can only approximate this ideal model. The practical limitations of operational amplifiers can, more or less, be ignored
when they are used with negative feedback to perform as operation amplifiers (Sec-
tion 2.5.1).
How close does a practical operational amplifier come to its ideal model?
Finite voltage gain Although high gains can be achieved (> 106 ) it is not infinite.
High open-loop gains will require special precautions to maintain amplifier sta-
bility.
Nonlinear transfer function If the voltage gain is not constant in the signal range
of interest, the output voltage will not be accurately proportional to the difference
between the input voltages. This effect can be disregarded, except for saturation
effects which strongly reduce the loop gain, by using ample negative feedback.
The maximum output voltage will be slightly less than the power supply voltage.
Therefore, only some limited input voltage can be handled.
Finite common mode rejection ratio The ideal common-mode-rejection ratio
(CMRR) is infinite. In applications with a noteworthy common-mode input voltage
(e.g., in noninverting operation amplifiers, Section 2.5.2) operational amplifiers
with high CMRR must be used.
Nonzero input admittance Although very low values can be obtained using FETs, the admittance is not zero. As a consequence an input bias current (typically 10 nA for bipolar and several pA for CMOS circuits) flows into the inputs. Unfortunately, this current is mismatched between the inverting and noninverting inputs, inducing the need of an input offset current to balance the output. Thus an input offset voltage/current is required to make the output voltage zero for a zero input signal.
Nonzero output impedance To achieve low output impedance a relatively high qui-
escent current of the output stage is required. But even then the output impedance
is not zero. In addition, the amount of current (and power) that can be drawn at
the output is limited. Most operational amplifiers are designed to drive load resis-
tances down to 2 kΩ.
Temperature effects (drift) The values of all parameters depend on temperature.
Temperature drift of the input offset voltage is particularly important.
In Section 2.2.1, we have come across the reverse transfer in two-ports. It takes care of
the effect of the output signal on the input signal. In general language this would be
called feedback action of the output on the input.
Self-regulating mechanisms are wide-spread, e.g. in biology. All senses in living
creatures are designed to provoke a response depending on the signal received by the
senses. Also evolution is supposed to take advantage of feedback. Feedback is a gen-
eral phenomenon found nearly anywhere in human every-day life. For example, driv-
ing a car makes use of the visual feedback of relevant information concerning traffic,
the lay-out of the road, etc. The existence of feedback in human relations is obvious.
Teaching thrives on feedback from students. Also the trial-and-error method takes full advantage of some feedback action. Numerous feedback mechanisms in economics and finance are known, e.g., if the gasoline price at the filling station depends on supply and demand, raising the price reduces the demand, resulting in a lower price. Without
further deliberation it should be clear that the information fed back can decrease or
increase the quantity under consideration, e.g., the gasoline price may go up or down,
dependent on the information received. This is called positive and negative feedback.
In electronic engineering, feedback is one of the most powerful tools available.
In mixed systems feedback is used to control mechanical, optical, thermal and other
physical processes.
Figure 2.52 depicts a symbolic general feedback loop. Although this is a very com-
mon presentation of feedback it is not at all typical. Its merit lies in the presence of
a loop which pleases visually oriented people. In electronics, as we will show later,
it is not always possible to isolate a (geometric) feedback “loop.” Besides, in every-
day life it would not be unusual to have several feedback “loops” working in parallel.
However, this figure is helpful in defining some important properties of feedback.
Let the forward transfer property of the main element A be 𝐴, i.e.
𝑠oA = 𝐴 × 𝑠iA ,
Fig. 2.52. Symbolic general feedback loop: forward element A (signals 𝑠iA, 𝑠oA), feedback element B (signals 𝑠iB, 𝑠oB), and the combined element AF with input 𝑠i and output 𝑠o.
and let the transfer property of the feedback element B be 𝐵, i.e. 𝑠oB = 𝐵 × 𝑠iB .
No other properties are assigned to these elements. In nearly all cases of electronic
feedback A would contain at least one operational active element and B only passive
ones. Hence the symbol A.
When combining these two elements as done in Figure 2.52 the input of B is de-
rived from the output of A, and the output of B must be added to the input signal 𝑠i to
add up to 𝑠iA :
𝑠iB = 𝑠oA
𝑠iA = 𝑠i + 𝑠oB
𝑠o = 𝑠oA .
The union of these two elements results in a new element with (internal) feedback,
called AF . Let us call its (forward) transfer property 𝐴 F , with
𝐴F = 𝑠o/𝑠i = (𝑠iA × 𝐴)/(𝑠iA − 𝑠iA × 𝐴𝐵) = 𝐴/(1 − 𝐴𝐵) .    (2.69)
Obviously, the combined element AF is bidirectional, it has in addition a reverse trans-
fer property 𝐵F
𝐵F = 𝐵/(1 − 𝐵𝐴) .    (2.70)
Even if the individual elements A and B transfer only into their forward direction,
the element with feedback transfers either way. This fact by itself makes it clear that
feedback does not change the properties of the element A, but results in a new element
AF with properties different from the single element (without feedback).
Applying feedback to an element does not change the properties of said element but gives rise to
a novel system with different properties.
However, observe that AF is of the same nature as A. This can be recognized in (2.69) by the fact that they have the same dimension. The dimensionless product 𝐴𝐵 is called forward (closed-) loop gain, whereas 𝐵𝐴 is called reverse (closed-) loop gain. The closed-loop gains are the characteristic properties of each feedback loop. For now we concentrate on the forward properties. There are three cases to be considered:
– 𝐴𝐵 > 0: positive feedback,
– 𝐴𝐵 = 0: no feedback, and
– 𝐴𝐵 < 0: negative feedback, making |𝐴F| < |𝐴|.
Actually, we will see that the term 1 − 𝐴𝐵 showing up in the denominator, which is called the (forward) return difference, really defines the feedback properties.
Problems
2.53. Determine the (forward) transfer property 𝐴 F of the three feedback configura-
tions shown in Figure 2.53.
2.54. By applying negative feedback of the amount 𝐵 to an active element with the
property 𝐴 the transfer property 𝐴 F should be made 1/2.
(a) How can this be done?
(b) What is peculiar with this solution?
2.55. How does feedback affect the intrinsic properties of elements inside the feed-
back loop?
For negative feedback (𝐴𝐵 < 0), it follows from (2.69) that |𝐴 F | < |𝐴|. This means
that the transfer value is reduced by negative feedback. From (2.69) one gets
𝐴F = 1/(1/𝐴 − 𝐵) .    (2.71)
Therefore, the transfer property 𝐵 must have the inverse dimension of 𝐴 as already
evidenced by the fact that the product 𝐴 × 𝐵 is dimensionless. For |𝐴| ≫ 1/|𝐵| the above equation reduces to
𝐴F ≈ −1/𝐵 .    (2.72)
Fig. 2.53. Feedback configurations from Problem 2.53.
Thus, for a closed-loop gain |𝐴𝐵| ≫ 1, the transfer value 𝐴F becomes de facto independent of 𝐴. It is practically entirely dependent on the inverse of the transfer value 𝐵 of the feedback element. With 𝐵 made of stable and linear elements there are the following consequences for the quality of 𝐴F as compared to 𝐴:
– the stability of 𝐴F is improved,
– the linearity of the transfer function is improved, and
– noise from inside the element A is reduced as much as the signal.
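A few lines of Python may illustrate the first consequence numerically; the values of 𝐴 and 𝐵 are arbitrary examples.

# Sketch (assumed values): de facto sole dependence of A_F on B, eqs. (2.69) and (2.72).
B = -0.01                          # assumed stable, linear feedback element
for A in (1e4, 2e4):               # the transfer value of the active element varies by 100 %
    A_F = A / (1 - A * B)          # eq. (2.69); A*B < 0, i.e. negative feedback
    print(A, A_F, -1 / B)
# A_F moves only from about 99.0 to 99.5 (a 0.5 % change) while A doubles,
# and both values stay close to -1/B = 100.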
The de facto sole dependence of 𝐴 F on 𝐵 is the most important principle used in the design and
fabrication of linear amplifiers.
Fig. 2.54. Feedback over two stages. (a) Accomplished by local feedback loops and (b) by one global feedback loop.
Since for negative feedback 𝐴 F /𝐴 is less than 1, the global feedback depends less on
changes of 𝐴 than the local feedbacks.
The best feedback results are obtained by making the feedback loop as large as feasible.
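The following Python sketch compares, for assumed example values, two cascaded stages with local feedback loops to the same two stages inside one global loop (cf. Figure 2.54); the feedback factors are chosen so that both versions have the same nominal gain.

# Sketch (assumed values): sensitivity to changes of A for local vs. global feedback.
A = 100.0              # assumed gain of each stage
B_local = -0.09        # assumed local feedback applied to each stage separately
B_global = -0.0099     # assumed global feedback around both stages

def local_gain(A):     # two stages, each with its own loop
    return (A / (1 - A * B_local)) ** 2

def global_gain(A):    # one loop around the cascade A*A
    return (A * A) / (1 - A * A * B_global)

print(local_gain(A), global_gain(A))        # both nominally 100
for f in (local_gain, global_gain):
    change = (f(1.1 * A) - f(A)) / f(A)     # A increases by 10 %
    print(f.__name__, round(100 * change, 2), '%')
# local loops: about 1.8 % change; global loop: about 0.2 % change.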
Fig. 2.55. Response function and ideal linear response of an amplifier as well as deviation 𝜖(𝑠o) from linear response as a function of the output signal 𝑠o.
Fig. 2.56. Splitting of the element A into A1 and A2; noise injected between the two parts appears at the output as 𝑛o.
Noise generated inside the amplifier or picked up by it must be converted into equivalent input noise. In Figure 2.56 the position of the noise source is indicated by the appropriate splitting of A into A1 and A2 with 𝐴1 × 𝐴2 = 𝐴. Some noise 𝑛 generated directly after A1 has the same effect as an input noise 𝑛i = 𝑛/𝐴1. This is a property of A1 and consequently independent of whether there is feedback or not. The noise signal 𝑛o at the output is given in the no-feedback case by
𝑛o = 𝑛 × 𝐴 2 (2.79)
Problem
2.57. Make it plausible to yourself why stable positive feedback behaves like an infi-
nite geometric series.
What has been said on the effect of negative feedback can be taken over for posi-
tive feedback taking into account the opposite sign of the (closed) loop gain. Sta-
bility and linearity get worse. Where is then the benefit of stable positive feedback?
From (2.81) it should be clear that the transfer value 𝐴 F can be made arbitrarily large.
For this reason there can be some benefit in applying stable positive feedback lo-
cally.
In electronics, both the active element A and the feedback element B are two-ports.
As a consequence there are four possible feedback arrangements of the two elements
(Figure 2.57). The input terminals can be in parallel or in series and the output termi-
nals as well. Based on the geometric arrangements of the inputs and the outputs of
the two-ports (see also Section 2.2.2) we arrive at the following four types of feedback:
– parallel–parallel feedback,
– series–parallel feedback,
– parallel–series feedback, and
– series–series feedback.
There exist four alternative names for these feedback configurations based on the ef-
fective electrical quantities. In this case, the names reflect the feedback action, namely
from output to input. As voltage is sensed in parallel and added in series, whereas cur-
rent is sensed in series and added in parallel the above types of feedback are called
– voltage–current feedback,
– voltage–voltage feedback,
– current–current feedback, and
– current–voltage feedback.
Sometimes just the input is considered and only the terms current feedback and voltage feedback are used. Although the closed-loop gain 𝐴𝐵 is a dimensionless ratio, it
is for the two current-feedback cases a current gain and for the two voltage-feedback
cases a voltage gain.
Although (closed) loop gains are dimensionless they are either a current gain or a voltage gain.
Consequently, one speaks in short of current or of voltage feedback, respectively.
Fig. 2.57. The four possible feedback arrangements of the active two-port A and the feedback two-port B (parallel or series connection at the input and at the output).
The type of forward transfer of each of these four configurations is characteristic for
them, i.e. feedback acts directly only on them. They are, in the order given above,
– 𝑟m , the transresistance (or more general transimpedance),
– 𝑔𝑣 , the voltage gain,
– 𝑔𝑖 , the current gain,
– 𝑔m , the transconductance (or more general transadmittance).
Circuit elements that have no impact on the closed-loop gain do not take part in the feedback action.
Fig. 2.58. Feedback circuit from Problem 2.58 and Problem 2.59.
Problems
2.58. Feedback is applied to an ideal current dependent current source with a current
gain 𝑔𝑖A according to Figure 2.58.
(a) What is the current gain 𝑔𝑖F of the feedback circuit?
(b) What is the voltage gain 𝑔𝑣F of the feedback circuit?
2.59. The output of a feedback two-port lies in series to the output of the active two-
port (Figure 2.58).
(a) Does the load resistor take part in the feedback action?
(b) Does shunting the load resistor by a short-circuit cancel the feedback action?
Fig. 2.59. Two two-ports in parallel–parallel configuration with hybrid parameters; the active two-port A is drawn with its hybrid-parameter equivalent circuit (𝑣iF = 𝑣iA, 𝑣oF = 𝑣oA).
By now, it should not surprise that a circuit responds to the actual impedance
which is not necessarily the static impedance, e.g., as given in data sheets. Conse-
quently, any comparison of circuits must be made under the same electrical burden to
make results comparable.
Taking advantage of duality it suffices to investigate just one configuration at the
input and one at the output. Figure 2.59 shows two two-ports in parallel–parallel con-
figuration with hybrid parameters.
The input impedance 𝑍iA of A is given as
𝑍iA = 𝑣iA/𝑖iA = (ℎiA 𝑖iA + ℎrA 𝑣oA)/𝑖iA = ℎiA + ℎrA × 𝑍iA × 𝑔𝑣A    (2.83)
after inserting the factor 𝑍iA × 𝑖iA /𝑣iA which is identical to 1 and using 𝑔𝑣 = 𝑣oA /𝑣iA .
The input impedance 𝑍iF of the arrangement with feedback is given as
𝑍iF = 𝑣iF/𝑖iF = (ℎiA 𝑖iA + ℎrA 𝑣oA∗)/(𝑖iA × (1 + 𝑖oB/𝑖iA)) = (ℎiA + ℎrA × 𝑍iA × 𝑔𝑣A∗)/(1 − 𝐴𝐵)    (2.84)
the common-mode-rejection ratio (CMRR) of the amplifier in use (Section 2.3.5.1). This
is the reason behind the rule of thumb “always invert, except when you can’t” mean-
ing “have a negative feedback with a parallel configuration at the input to bring about
a node where the voltage hardly changes at all, i.e. with a small impedance (toward
ground).” A virtual ground may be at any voltage. However, changing the current in
the node will not change the voltage due to its very small impedance. Due to the unavoidable nonlinearity (reducing the closed-loop gain) the small impedance is only maintained for not too large currents. Nodes with virtual ground are very convenient refer-
ence points in circuit analysis due to their practically invariable voltage.
Virtual ground is a node with (very) small impedance so that the voltage swing due to a current
swing is negligible in comparison to other voltages in this circuit.
One terminal of an ideal voltage source with the other terminal connected to ground
is such a(n ideal) virtual ground. The voltage does not change. It is independent of the
current flow.
By applying duality we arrive at the virtual open-circuit. It can be represented by
an ideal current source which provides an invariable current, independent of the ap-
plied voltage (Section 2.3.5.1). Consequently, the voltage across an ideal current source
is said to be floating.
A virtual open-circuit is a terminal pair with a very small conductance so that the current swing due
to a voltage swing is negligible.
Duality also changes the current loop gain of the parallel configuration into a voltage
loop gain of the serial arrangement.
The reverse “input” impedance, i.e. the output impedance, usually, does not get
the attention it deserves. Obviously, when viewing from the back, A and B exchange
their function, i.e. the subscript A must be replaced by B in (2.84) and the reverse loop
gain 𝐵𝐴 must be used as loop gain.
−𝑖oA/𝑖iB = 𝐵𝐴    (2.89)
the reverse (closed) loop gain. The compact answer for a parallel configuration, for which the admittance comes naturally, is 𝑌oF = 𝑌iB × (1 − 𝐵𝐴).
Note: Clearly, the forward closed-loop gain 𝐴𝐵 depends on the impedance at the out-
put whereas the reverse closed-loop gain depends on the impedance at the input.
Their appropriate values must be obtained from the actual closed-loop configurations.
Problem
2.60. A floating voltage (e.g., the voltage across an ideal current source) is cut in half.
How much does the current change?
Fig. 2.60. Parallel–parallel negative feedback with one resistor in the feedback two-port.
Fig. 2.61. An ideal voltage amplifier with a parallel–parallel feedback.
An ideal voltage amplifier is fully described by its voltage gain 𝑔𝑣, an infinite input impedance, zero output impedance and no reverse transfer, i.e. just one nonzero parameter (𝑔𝑣) is needed for its description.
In the case shown in Figure 2.61 all the input current flows through 𝑌 (with 𝑌 =
1/𝑍) which results in an output voltage 𝑣o of 𝑣o = 𝑣i − 𝑖i × 𝑍. With 𝑣o = 𝑣i × 𝑔𝑣 the
input admittance is given by
𝑌i = 𝑖i/𝑣i = 𝑌 × (1 − 𝑔𝑣) .    (2.92)
The following cases must be considered:
– If 𝑔𝑣 > 1, it is a case of negative admittance (impedance). A positive input voltage
results in an input current flowing out of the circuit.
– If 0 < 𝑔𝑣 ≤ 1, such a circuit is called bootstrap circuit. The input current is smaller
than expected from applying Ohm’s law on 𝑍 itself. For 𝑔𝑣 = 1, there is no voltage
drop on 𝑍 because 𝑣o = 𝑣i , i.e. 𝑖i = 0 and 𝑌dyn = 0.
– If 𝑔𝑣 < 0, the so-called Miller effect increases the admittance of 𝑍 accordingly.
This exemplifies how admittances arranged in parallel to an amplifier are made dy-
namic.
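The three cases can be made tangible with a short Python sketch; the impedance and the gain values are assumed examples.

# Sketch (assumed values): dynamic input admittance Y_i = Y*(1 - g_v), eq. (2.92),
# of an admittance Y = 1/Z connected between input and output of a voltage amplifier.
Z = 10e3                                # assumed feedback impedance, 10 kOhm
Y = 1 / Z
for g_v in (+2.0, +0.99, -1000.0):      # negative admittance, bootstrap, Miller effect
    Y_i = Y * (1 - g_v)
    Z_i = float('inf') if Y_i == 0 else 1 / Y_i
    print(g_v, Y_i, Z_i)
# g_v = +2    : Y_i < 0, the input current flows out of the circuit
# g_v = +0.99 : bootstrap, Z_i is about 1 MOhm instead of 10 kOhm
# g_v = -1000 : Miller effect, Z_i is about 10 Ohm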
Duality considerations
Without much ado, we can apply duality to the above case (Figure 2.63). The input
current of a series–series feedback flows through the feedback impedance 𝑍 which is
dynamically changed to 𝑍i = 𝑍 × (1 − 𝑔𝑖 ). The following cases must be considered:
– If 𝑔𝑖 > 1, it is a case of negative impedance.
– If 0 < 𝑔𝑖 ≤ 1, such a circuit is dual to the bootstrap circuit.
– If 𝑔𝑖 < 0, the impedance is increased accordingly.
In the previous case, positive feedback (i.e. a positive open-loop gain) is the cause of negative admittance, in the present case of negative impedance. The i-v-characteristic of
the dynamic impedance has an N-shape (Section 2.1.6). This can easily be found out
Fig. 2.62. Inverting operation amplifier (1-kΩ input resistor, 10-kΩ feedback resistor) with a voltmeter of internal resistance 𝑅V connected to the amplifier input.
by starting at the origin. The characteristic through the origin must be that of 𝑍, be-
cause for 𝑖 = 0 the amplifier does not work, i.e. 𝑔𝑖 = 0. With increasing 𝑔𝑖 , the slope
of the characteristic becomes smaller, becoming zero with 𝑔𝑖 = 1. Increasing the cur-
rent gain further causes the negative slope, the negative impedance. At the end of the
dynamic range, the current gain decreases. When it becomes one, the characteristic
is flat again and from then on the impedance approaches the value of 𝑍 again.
The characteristic of the dynamic admittance dealt with just before has an N-
shape, too, as voltage is used as dependent variable. However, when, as usual, current
is used then the characteristic of the dynamic impedance has an S-shape as expected
because the S-shape is dual to the N-shape. Above deliberation supports the use of
conductance as an independent property and not just as reciprocal resistance.
Example 2.7 (Measuring the voltage across a dynamic impedance). Figure 2.62 shows
an inverting operation amplifier. The voltage at the input of the operational amplifier
shall be measured. As discussed in Section 2.1.4 the loading of the circuit by the mea-
suring instrument requires a correction of the measured value to obtain the correct
value (without loading). If the correction is below a given level, one may disregard it
saving time and effort. In the present case this level is assumed to be 1%. What mini-
mal impedance 𝑅V must the voltmeter have so that the correction will be not more than
1%? As a first step the supposedly linear network is replaced by a real voltage source.
At this point we note that the absolute voltage of this source need not be determined
as we are only concerned with the relative voltage change. At the input of the opera-
tional amplifier three impedances are in parallel, the impedance of the 1-kΩ resistor,
that of the 10-kΩ resistor and the input impedance of the amplifier. The last, usually,
is so high that it may be disregarded. Thus the impedance of the real voltage source
is given by the parallel combination of just two impedances. That of the 1-kΩ resistor
is just 1 kΩ because it is parallel to the voltage to be measured. (The signal voltage
source can be replaced by a short-circuit according to the rules of the superposition
theorem.) However, the impedance of the 10-kΩ resistor is dynamically changed as
discussed above. The second terminal of this resistor is not connected to ground but
to the output of the amplifier that has a voltage gain 𝑔𝑣 = −1000. Consequently, its
impedance as seen from the input is 𝑍10k = 10 000/1001 which is about 10 Ω. There-
fore, the impedance of the real voltage source is due to the 1-kΩ resistor in parallel a
Fig. 2.63. Series–series negative feedback with one conductor in the feedback two-port.
little smaller, namely 𝑍Th = 9.89 Ω. Thus, a voltmeter with an impedance of 979 Ω
would load the circuit so little that the correction would amount to 1%.
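The arithmetic of Example 2.7 can be checked with a few lines of Python (all values as given in the example):

# Sketch: the numbers of Example 2.7 recomputed.
g_v = -1000.0                         # voltage gain of the operational amplifier
R1, RF = 1e3, 10e3                    # 1-kOhm input resistor, 10-kOhm feedback resistor

Z_F_dyn = RF / (1 - g_v)              # dynamic impedance of the feedback resistor, about 10 Ohm
Z_Th = 1 / (1 / R1 + 1 / Z_F_dyn)     # source impedance seen by the voltmeter, about 9.89 Ohm
R_V_min = Z_Th * (1 - 0.01) / 0.01    # voltage-division error of at most 1 %, about 979 Ohm
print(Z_F_dyn, Z_Th, R_V_min)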
Now let us investigate the conductance of 𝑌 in Figure 2.60 viewed from the output.
Any signal 𝑣o at the output is necessarily accompanied by a signal 𝑣i = 𝑣o /𝑔𝑣 at the
input. Thus the voltage across 𝑌 is 𝑣o − 𝑣o /𝑔𝑣 . Consequently, the conductance 𝑌o of
𝑌 as viewed from the output is 𝑌o = 𝑌 × (1 − 1/𝑔𝑣 ). We should now understand the
difference between the bare conductance 𝑌 measured between the two terminals of
the component and the two dynamic conductances 𝑌i and 𝑌o measured at each port
against ground. Although the conductance of 𝑌 does not change by including it into a
feedback loop the feedback action generates dynamic values when measured against
the reference (ground).
Using an impedance 𝑍 as series–series feedback (Figure 2.63) with an (ideal)
current amplifier the dual circuit to that of Figure 2.60 is obtained, giving dynamic
impedances: 𝑍i = 𝑍 × (1 − 𝑔𝑖 ) and 𝑍o = 𝑍 × (1 − 1/𝑔i ). Although such a circuit has
very little importance because current amplifiers are much less common than voltage
amplifiers the appropriate configurations are shown in Figure 2.63 as an exercise in
duality. In this case 𝑧-parameters are optimum because two two-ports are connected
in a series–series configuration as shown in Figure 2.63. The feedback two-port con-
tains just one impedance 𝑍. Then 𝑧iB = 𝑧rB = 𝑧fB = 𝑧oB = 𝑍.
Fig. 2.64. Circuit from Problem 2.63 (𝑔𝑣 = 1000).
Problems
2.61. Does a resistor change its value, i.e. change its impedance when inserted into a
feedback loop?
2.62. Investigate the impedance of the following one-port: the port with the voltage 𝑣
is shunted by a resistor 𝑅 and in parallel to it is a real dependent voltage source with
a source resistor 𝑅S and a voltage of 𝑣S = 𝑣 × 𝑔𝑣 .
(a) Determine the (dynamic) impedance of the one-port.
(b) Under which condition is the admittance 𝑌 = 1/𝑍 zero?
2.63. In Figure 2.64 the dark current of a photo cell is measured by an ammeter having
an impedance of 1 Ω. The reading is 1 μA.
(a) Apply the loading corrections to find the actual dark current.
(b) What output voltage do you expect with the ammeter removed?
Fig. 2.65. Parallel–parallel feedback with a feedback two-port B applied to an active two-port A. Inverse hybrid parameters are chosen to comply with the voltage gain of the amplifier.
𝑔𝑣 = 𝑣o/𝑣i = (𝑣i × 𝑔f × 𝑍L/(𝑔o + 𝑍L))/𝑣i = 𝑔f/(1 + 𝑔o/𝑍L) .    (2.93)
With feedback (finite impedance of 𝑅) the output is loaded by 𝑍∗L consisting of
the parallel configuration of 𝑍L and the (dynamic) impedance 𝑍dynR of the feedback
resistor 𝑅 as viewed from the output. As discussed above
𝑍dynR = 𝑅/(1 − 1/𝑔𝑣)    (2.94)
so that the equivalent load impedance 𝑍∗L becomes
𝑍L∗ = 1/(1/𝑍L + (1 − 1/𝑔𝑣)/𝑅) .    (2.95)
Thus the decrease of the voltage gain when connecting the feedback resistor is solely
the effect of additional loading of the amplifier and not due to feedback action.
As a stringent test, make 𝑔o in (2.93) equal zero. Such a choice makes the active
two-port closer to ideal so that an improved feedback circuit can be expected if at
all. In that case the loading of the active two-port has no effect and consequently the
voltage gain of the two two-ports, with and without feedback, is identical.
Besides, all these considerations are completely unnecessary if you just stick to a
simple fact (stressed before). Any electronic component retains its properties (at a given
operating point, i.e., with the same loading!) independently of its use. Both the input
and the output voltage of the circuit with feedback are the same as without feedback,
therefore, their ratio necessarily stays the same.
Negative feedback only decreases the characteristic transfer parameter. For a
parallel–parallel configuration this is the transimpedance 𝑟m
𝑟m = 𝑣o/𝑖i = 𝑍i × (𝑖i/𝑣i) × (𝑣o/𝑖i) = 𝑍i × 𝑔𝑣 .    (2.96)
The voltage gain 𝑔𝑣 does not change through the feedback action, as discussed, so
that
𝑟mF = 𝑍iF × 𝑔𝑣 = (𝑍i/(1 − 𝐴𝐵)) × 𝑔𝑣    (2.97)
yielding
𝑟mF = 𝑟m/(1 − 𝐴𝐵) .    (2.98)
The return difference is the factor by which negative feedback reduces the charac-
teristic transfer parameter. This was expected from the general properties of feedback
(Section 2.4.1).
Fig. 2.66. Common mode signal applied to a long-tail pair.
Thus we have two symmetric transistor stages Q1 and Q2, with equal collector resistors 𝑅C
and emitter resistors 2𝑅E . It is obvious that a signal applied to the bases of both tran-
sistors will result in the same voltage at the emitters of both transistors if the pairs of
transistors and resistors have identical properties each. As the voltages are equal no
current will flow through the connection between the resistors so that this connection
may be removed without disturbing the values of the electrical variables.
The resistor 2𝑅E at the emitter of the transistor performs a negative series–series
feedback. The output current builds up a voltage by flowing through 2𝑅E that counter-
acts the input voltage so that we have a current–voltage feedback also called series–
series feedback. As we know from Section 2.4.3.1 such a feedback changes the input
impedance from 𝑍iA to 𝑍iF = 𝑍iA × (1 − 𝐴𝐵).
With 𝑔𝑣A = 𝑔𝑖A × 𝑅C /𝑍iA and 𝑔𝑣F = 𝑔𝑖A × 𝑅C /𝑍iF the common-mode-rejection
ratio becomes
𝐶𝑀𝑅𝑅 = 𝑔𝑣A/𝑔𝑣F = (1 − 𝐴𝐵)    (2.99)
with 𝐴𝐵 = −(𝛽 + 1) × 2𝑅E /𝑍iA . In practical circuits a constant current source with its
very high output impedance replaces 𝑅E providing optimal 𝐶𝑀𝑅𝑅.
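For assumed example values of 𝛽, 𝑅E and 𝑍iA, a short Python sketch gives an idea of the size of the common-mode-rejection ratio obtained this way:

# Sketch (assumed values): common mode rejection of the long-tail pair, eq. (2.99).
beta = 100.0        # assumed current gain of the transistors
R_E = 5e3           # assumed tail resistor (2*R_E per transistor half)
Z_iA = 2.5e3        # assumed input impedance of one transistor stage

AB = -(beta + 1) * 2 * R_E / Z_iA    # series-series loop gain for common mode signals
CMRR = 1 - AB                        # eq. (2.99): about 405
print(CMRR)
# Replacing R_E by a constant current source (very high output impedance)
# makes |AB|, and therefore the CMRR, much larger still.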
Problems
2.64. Series–series feedback
(a) Does a series–series feedback act directly on the current gain?
(b) Does a series–series feedback act directly on the transimpedance?
(c) Does a series–series feedback act directly on the voltage gain?
(d) On which transfer property does a series–series feedback act directly?
2.65. The voltage gain in an amplifier varies by 40%. Which kind of negative feedback
must be applied using how much closed-loop gain to reduce the variations to 1%?
2.66. The transfer function of a voltage amplifier deviates by up to 12% from linearity.
Which kind of feedback must be applied using how much closed-loop gain to reduce
the nonlinearity to less than 1%?
2.67. In Figure 2.67, the feedback loop of a parallel–parallel feedback can be opened
by means of a switch. How does the voltage gain change when this switch is closed
terminating the open-loop condition?
Fig. 2.67. Feedback loop of Problem 2.67 (𝑔𝑣 = 1000, with an 11-kΩ resistor).
Forward quantities
Remember: With negative feedback impedances become smaller if the configuration is in parallel,
larger when in series.
If in doubt what kind of feedback is acting, here are simple tests to find out.
Example 2.9 (All four types of feedback configurations in one circuit). Figure 2.68
shows a circuit in which, depending on the choice of the input and output terminals
all four feedback configurations can be found. As we already know, the easiest way to
Fig. 2.68. Circuit with two transistors Q1 and Q2, resistors 𝑅1 and 𝑅2, a feedback impedance 𝑍F and terminals 1 to 4; depending on the choice of input and output terminals all four feedback configurations can be found.
recognize whether the feedback is in parallel or in series at a port of the active element
is to apply a short-circuit at that port. If this short-circuit nullifies the feedback action,
one has a parallel configuration. Applying this recipe to the circuit of Figure 2.68 we
get the answers listed in Table 2.6.
If a short-circuit at a port nullifies the feedback action, parallel feedback is present at that port.
Problem
2.68. Verify the findings of Table 2.6 for the four feedback variants.
Fig. 2.69. External feedback at a single amplifier stage: (a) parallel–parallel feedback, (b) series–series feedback (and the corresponding two-port circuit models).
Figure 2.69 shows the two obvious feedback configurations with a bipolar junc-
tion transistor. In Figure 2.69a, the feedback resistor 𝑅F is from the collector to the
base, establishing a parallel–parallel feedback. In Figure 2.69b, the feedback resistor
𝑅F (usually called 𝑅E ) leads from the emitter to ground giving rise to a series–series
feedback.
As we will show a series–parallel feedback applied to a common-emitter circuit
coincides with the common-collector circuit. Applying duality will spare us the effort
to prove that the common-base circuit is a common-emitter circuit with parallel–series
feedback.
For this exercise it is important that the following two facts, mentioned before, are understood.
– Just from looking at a two-port feedback configuration its symmetry with regard
to input and output is obvious (see Figure 2.19). Such an arrangement is bidirec-
tional. Therefore, there is a forward-loop gain 𝐴𝐵 (which, usually, is called just
loop gain) and a reverse-loop gain 𝐵𝐴.
– Whenever a valid comparison is made, it is absolutely necessary to make it un-
der “identical” conditions. Therefore, care must be taken that crucial data will be
Fig. 2.70. Common-emitter circuit (a) and common-collector circuit (b) with signal voltage 𝑣S, source impedance 𝑍S and load impedance 𝑍L. The transistor's operating point is set by the power supply (modelled by VBB and VCC). Note that the signs of input and output quantities are chosen according to the two-port convention.
the same, in particular, that the electrical burden is the same in both cases. For-
ward properties are burdened by the load impedance and the impedance of the
feedback network at the output, reverse properties are burdened by the source
impedance and the impedance of the feedback network at the input.
The correct presentation of the output properties of feedback arrangements requires that the re-
verse loop gain is not disregarded.
A series configuration at the input makes the (forward) loop gain a voltage gain; therefore we must investigate how much the voltage gain 𝑔𝑣c is reduced by the feedback:
𝑔𝑣c = 𝑣oc/𝑣ic = −𝑣oe/(𝑣ie − 𝑣oe) = −(𝑣oe/𝑣ie)/(1 − 𝑣oe/𝑣ie) = −𝑔𝑣e∗/(1 − 𝑔𝑣e∗) = |𝑔𝑣e∗|/(1 + |𝑔𝑣e∗|) .    (2.100)
From the general feedback theory we know that negative feedback reduces the char-
acteristic transfer parameter by the return difference
𝑔𝑣c = 𝑔𝑣e∗/(1 + |𝐴𝐵|)
so that the forward loop gain 𝐴𝐵 is obtained as 𝐴𝐵 = 𝑔𝑣e∗. The asterisk denotes that the value under actual load conditions is used.
All of the output voltage is fed back, i.e. the maximum possible feedback (with
passive elements) is applied. Any (reasonable) active element would have a rather
high voltage gain so that the voltage gain of a common collector circuit is only
marginally smaller than 1, e.g., with |𝑔𝑣e∗| = 100 the common collector circuit volt-
age gain 𝑔𝑣c would be 0.99. Common collector/drain circuits are called emitter/source
follower because a voltage gain of about one means that the (small-signal) output volt-
age (at the emitter/source) is identical with (i.e. it follows) the input voltage (at the
base/gate). Using operational amplifiers the corresponding circuit is called voltage
follower (Section 2.5.2.1).
The input impedance is obtained from
𝑍ic = 𝑣ic/𝑖ic = (𝑣ie − 𝑣oe)/𝑖ie = 𝑣ie × (1 − 𝑣oe/𝑣ie)/𝑖ie = 𝑍ie∗ × (1 − 𝑔𝑣e∗) = 𝑍ie∗ × (1 + |𝐴𝐵|) .    (2.101)
Again, the result conforms with the general equation.
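A small Python sketch evaluates (2.100) and (2.101) for an assumed loaded gain and input impedance of the common-emitter stage:

# Sketch (assumed values): common collector circuit as series-parallel feedback,
# eqs. (2.100) and (2.101).
g_ve = 100.0        # assumed magnitude of the loaded common-emitter voltage gain
Z_ie = 2.5e3        # assumed loaded input impedance of the common-emitter stage

g_vc = g_ve / (1 + g_ve)        # voltage gain of the emitter follower, about 0.99
Z_ic = Z_ie * (1 + g_ve)        # input impedance raised by the return difference, about 253 kOhm
print(g_vc, Z_ic)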
The reverse properties can be investigated by using the output as input and replacing
the (ideal) signal generator by a short-circuit following the recipes of the superposi-
tion theorem (Section 2.1.3.5). Because of the parallel configuration at the output the
reverse loop gain 𝐵𝐴 has the dimension of a current gain.
The parallel configuration pleads for the use of an admittance rather than impe-
dance, therefore, the input admittance 𝑌iB of the feedback two-port should be used
𝑌iB = 1/(𝑍S + 𝑍ie∗) .    (2.102)
Using 𝑣oc = −𝑣oe = 𝑣ic − 𝑣ie = 𝑖ie × 𝑍S − 𝑖ie × 𝑍∗ie and 𝑖oc = 𝑖ie − 𝑖oe with 𝑣ic = 𝑖ic × 𝑍S =
𝑖ie × 𝑍S the output admittance 𝑌oc is obtained as
𝑌oc = 𝑖oc/𝑣oc = (−𝑖ie − 𝑖oe)/(𝑖ic × 𝑍S + 𝑣ie) = (1 + 𝑔𝑖e∗)/(𝑍S + 𝑍ie∗) = 𝑌iB∗ × (1 + 𝐵𝐴)    (2.103)
Fig. 2.71. Two-stage amplifier with a dynamically increased load resistance for the transistor Q1.
Example 2.11 (Bootstrapping with a common collector circuit). In Figure 2.71, a circuit
with two transistor stages Q1 and Q2 is shown. Q1 is a common-emitter circuit, Q2 a
common-collector circuit. The passive load resistance of Q1 (if the 680 Ω resistor is
not connected to the emitter) is 680 Ω in series to 2.7 kΩ, i.e. 3.38 kΩ, parallel to the
input impedance of Q2 of roughly 270 kΩ, i.e. about 3.3 kΩ. The voltage gain 𝑔𝑣 of
Q1 would be low being about proportional to the load impedance. By connecting the
680 Ω resistor to the emitter of Q2 (which acts as an emitter follower with an assumed
voltage gain of 𝑔𝑣 = 0.99) the dynamic impedance of the 680 Ω resistor is 𝑍680 =
680/(1 − 0.99) = 68 kΩ (bootstrap effect, Section 2.4.3.2) so that the load impedance
of Q1 becomes 54 kΩ increasing the voltage gain of Q1 (and of the two-stage amplifier)
by roughly a factor of 16.
By using a voltage follower (Section 2.5.2.1) instead of the emitter follower the volt-
age gain would be much closer to 1 so that the dynamic increase of a resistor due to
the bootstrap effect can be much more dramatic.
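The numbers of Example 2.11 are easily reproduced with a few lines of Python, following the arithmetic of the example (the input impedance of Q2 and the follower gain are the rough values assumed in the text):

# Sketch: the bootstrap effect of Example 2.11 recomputed.
g_v_follower = 0.99                  # assumed gain of the emitter follower Q2
R_boot = 680.0                       # resistor connected to the emitter of Q2
Z_in_Q2 = 270e3                      # assumed input impedance of Q2
R_rest = 680.0 + 2.7e3               # passive part of the load of Q1

def parallel(*rs):
    return 1 / sum(1 / r for r in rs)

Z_load_passive = parallel(R_rest, Z_in_Q2)      # about 3.3 kOhm
Z_boot_dyn = R_boot / (1 - g_v_follower)        # 68 kOhm
Z_load_boot = parallel(Z_boot_dyn, Z_in_Q2)     # about 54 kOhm
print(Z_load_passive, Z_load_boot, Z_load_boot / Z_load_passive)   # gain up by roughly 16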
Problems
2.69. Common-Base Circuit
(a) Apply the principles of duality on Example 2.10 and the circuit of Figure 2.70b
and get the corresponding equations for the parallel–series feedback, i.e. for the
common-base circuit.
Fig. 2.72. Circuit from Problem 2.70.
Fig. 2.73. Circuit from Problem 2.71.
(b) Verify the equations by straight calculations with the currents and voltages given
in Figure 2.70a.
2.70. Analyze the circuit of Figure 2.72 using Thevenin’s theorem and taking advan-
tage of the knowledge that 𝑅E is dynamically enhanced and do the following:
(a) Give the value of 𝑅x when 𝐼E = 5 mA, 𝑔𝑖 = 𝐼C /𝐼B = 99, and 𝑉BE = 0.699 V.
(b) Give the complete (i.e. the input and the output) operating point of the transistor.
2.71. Answer the following questions concerning the circuit of Figure 2.73 for both
outputs designated by 𝑣o1 and 𝑣o2 .
(a) What kind of feedback exists?
(b) Name the characteristic transfer parameter.
(c) Give attributes (low, medium, and high) to the values of the input and the output
impedances.
Amplifiers with external feedback are called operation amplifiers if the characteristic transfer pa-
rameter is de facto independent of the transfer parameter of the active device.
Although all four types of active elements with their four types of transfer proper-
ties would qualify, usually just operational amplifiers (Section 2.3.5.2) are considered.
Those are voltage amplifiers with differential input. As such they have high input im-
pedance and low output impedance. As a high forward closed-loop gain (Section 2.4.1)
is required, the voltage gain should be as high as possible. In many cases, the essen-
tials of a feedback circuit can be understood using an ideal operational amplifier with
input conductance and output resistance zero and an infinite voltage gain.
Fig. 2.74. Inverting operation amplifier with general feedback network.
To simplify life we now assume that the operational amplifier has an input admittance
of 𝑌iA = 0 and an output impedance of 𝑍oA = 0. Besides, the voltage gain 𝑔𝑣A should
be as high as possible. The two following simple facts make an analysis of this circuit
a straightforward exercise:
– From (2.92) we know that the input impedance of this circuit is 𝑍iF = 𝑍F /(1−𝑔𝑣A ),
i.e. with −𝑔𝑣A ≫ 1 it can be very low establishing a virtual ground (Section 2.4.3.1).
– Because of 𝑌iA = 0 (or very small when compared to 𝑌iF ) all of the input current
𝑖iF flows through the feedback network so that the output voltage 𝑣oF is given by
𝑣oF = −𝑖iF × 𝑍F . As 𝑖iF flows into the feedback network the voltage at the other
end of the network (at the amplifier output) must be negative, in agreement with
the inverting mode of the amplifier.
This answer is in perfect agreement with the findings of the general feedback theory
where we found that the transfer property of a system with negative feedback becomes
independent of that of the active element if a very high (closed-) loop gain is present
(2.72). Thus, an inverting operation amplifier is an amplifier with transimpedance −𝑍F
as characteristic transfer property.
Figure 2.75 depicts the most common use of an inverting operation amplifier. The
resistance 𝑅S of a real voltage source (e.g., obtained by Thevenin’s theorem) is cas-
caded with an operation amplifier using a resistor 𝑅F in a parallel–parallel feedback
configuration. Because of the node with the virtual ground, the input current is 𝑖iF = 𝑣S/𝑅S, so that the output voltage becomes 𝑣oF = −𝑖iF × 𝑅F = −𝑣S × 𝑅F/𝑅S.
Fig. 2.75. Inverting operation amplifier with the source resistance 𝑅S in series to the input and a feedback resistor 𝑅F.
Circuit elements that lie outside the feedback loop have no direct influence on the feedback action.
The correct interpretation of the inverting operation amplifier within the framework
of feedback circuits is as follows: Because of the virtual ground the resistor 𝑅S acts as
a voltage-to-current converter with the transadmittance 1/𝑅S whereas the cascaded
operation amplifier works as a current-to-voltage converter with the transimpedance
−𝑅F so that the combined transfer property, voltage gain from source to output, be-
comes −𝑅F /𝑅S .
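A minimal Python sketch of this interpretation, with assumed resistor and source values:

# Sketch (assumed values): the inverting operation amplifier of Fig. 2.75 viewed as
# a cascade of a voltage-to-current and a current-to-voltage converter.
R_S = 1e3          # assumed source resistor
R_F = 10e3         # assumed feedback resistor
v_S = 0.1          # assumed source voltage (V)

i_iF = v_S / R_S            # virtual ground: R_S converts voltage to current
v_oF = -i_iF * R_F          # the operation amplifier converts current to voltage
print(v_oF, -R_F / R_S * v_S)    # both give -1.0 V, i.e. a gain of -R_F/R_S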
Problem
2.73. An operational amplifier has an input impedance 𝑍iA = 10 kΩ and an open-
loop voltage gain of 𝑔𝑣A = −200. The output impedance is 𝑍oA = 0 Ω. By connecting
the output to the inverting input with a resistor of 10 kΩ feedback is established.
(a) What kind of feedback is this?
(b) How much does the voltage gain 𝑔𝑣F of the feedback circuit differ from 𝑔𝑣A the
voltage gain of the amplifier?
Example 2.12 (Ideal vs. real operational amplifier). The circuit of an inverting operation amplifier (Figure 2.76) uses a very unsophisticated operational amplifier.
Fig. 2.76. Parallel–parallel feedback applied to a differential amplifier.
Table 2.7. Hybrid parameters of the two two-ports of the feedback loop in Figure 2.76.
The parameter values of this operational amplifier and of the feedback two-port are given in Table 2.7. The impedance 𝑍S of the source is 1 kΩ and that of the load 𝑍L is 1 kΩ,
as well. With these numbers we get
– the forward (closed-) loop gain 𝐴𝐵 = 𝑖oB /𝑖iA = −1950.7,
– the reverse (closed-) loop gain 𝐵𝐴 = 𝑖oA /𝑖iB = −192 514,
– the input admittance 𝑌iF = 𝑌iA × (1 − 𝐴𝐵) = 1/10.248 S,
– the output admittance 𝑌oF = 𝑌iB × (1 − 𝐵𝐴) = 1/0.265 S.
The loaded voltage gain 𝑔𝑣∗ is smaller than the open-loop gain 𝑔𝑣 = −5000 because of the voltage division between 𝑍oA and 𝑍L shunted by the dynamic impedance of 𝑅F
as seen from the output, so that 𝑔𝑣∗ = −4876.
Finally, the characteristic transfer parameter, the transimpedance 𝑟mF = 𝑟mA/(1 − 𝐴𝐵) = 𝑍iA × 𝑔𝑣∗/(1 − 𝐴𝐵) = −49.963 V/mA. In the ideal case, it is expected to be the inverse of the transadmittance of the feedback two-port: 𝐴F = 1/𝐵 = 1/(−1/(50 kΩ)) = −50 V/mA.
Table 2.8 compares the ideal values with the values obtained with this unsophisti-
cated operational amplifier. In particular, the characteristic transfer parameters agree
rather nicely. Up-to-date operational amplifiers resemble their ideal counterpart much
closer. Both the input impedance and the open-loop voltage gain are about two orders
of magnitude higher. Thus operation amplifiers using these devices will approach the
ideal behavior even more closely.
Table 2.8. Comparison of the ideal values with realized values of the amplifiers from Example 2.12.
                                 ideal        realized
Operational amplifier
  1/𝑍iA                          0            1/(20 kΩ)
  𝑍oA                            0            25 Ω
  −1/𝑔𝑣A                         0            1/5000
Inverting operation amplifier
  𝑍iF                            0            10.25 Ω
  𝑍oF                            0            0.26 Ω
  −1/𝑔𝑣F∗                        0            1/4876
  −1/𝐴𝐵                          0            1/1951
  −1/𝐵𝐴                          0            1/192 514
  𝑟mF                            −50 kΩ       −49.96 kΩ
If the currents 𝑖𝑘 come from voltage sources 𝑣S𝑘 with equal resistances 𝑅S , then one
gets
∑_{𝑘=1}^{𝑛} 𝑖𝑘 = (1/𝑅S) × ∑_{𝑘=1}^{𝑛} 𝑣S𝑘    (2.107)
and
𝑣oF = −(𝑅F/𝑅S) × ∑_{𝑘=1}^{𝑛} 𝑣S𝑘 .    (2.108)
With 𝑅F = 𝑅S this becomes
𝑣oF = −∑_{𝑘=1}^{𝑛} 𝑣S𝑘 .    (2.109)
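A short Python sketch of the summing action, with assumed source voltages and 𝑅F = 𝑅S:

# Sketch (assumed values): summing with an inverting operation amplifier, eqs. (2.107)-(2.109).
R_S = 10e3
R_F = 10e3
v_sources = [0.5, -0.2, 1.0, 0.1]          # assumed source voltages (V)

i_sum = sum(v / R_S for v in v_sources)    # the currents add in the virtual-ground node
v_oF = -R_F * i_sum                        # eq. (2.108); with R_F = R_S simply -(sum of sources)
print(v_oF, -sum(v_sources))               # both -1.4 V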
Fig. 2.77. Summing operation amplifier with four source voltages 𝑣S1 to 𝑣S4 and equal resistors 𝑅.
In the forward direction, the current 𝐼D of a diode is, over several decades, given
by
𝐼D = 𝑓1 × exp(𝑓2 × 𝑉D ) . (2.110)
Fig. 2.78. A voltage limiter accomplished by active voltage clipping.
minus the forward voltage of the diode. It cannot be less, i.e. the output voltage is limited
to that voltage.
By raising the voltage at the noninverting input (e.g., by means of a potentiome-
ter as shown in Figure 2.78) the voltage at the inverting input is raised by the same
amount and consequently the output voltage as well. This allows setting the voltage
limit according to the need.
Operational (or operative) amplifiers, i.e. differential amplifiers with high input impedance, low output impedance, and high gain, are also used for operation amplifiers with series–parallel feedback. Such an operation amplifier is shown in Figure 2.79.
Fig. 2.79. Noninverting operation amplifier with standard resistive feedback network.
The following two simple facts make an analysis of this circuit a straightforward
exercise.
– From (2.87) we know that the input impedance 𝑍iF of this circuit is 𝑍iA × (1 − 𝐴𝐵). With |𝑔𝑣A| ≫ 1 the magnitude of the (negative) forward loop gain, which is also a voltage gain due to the series configuration at the input, is very large, too, making the input impedance 𝑍iF very large (1 MΩ to 1 TΩ) even for nonideal input impedances 𝑍iA.
– Because of the high gain (|𝑔𝑣A | ≫ 1) the differential input voltage at the opera-
tional amplifier is essentially zero.
With this information (negligible input admittance and negligible differential input
voltage) we get from the voltage division (in Figure 2.79) 𝑣iF = 𝑣oF × 𝑅1 /(𝑅2 + 𝑅1 ) the
voltage gain 𝑔𝑣F (the characteristic transfer parameter)
𝑔𝑣F = 𝑣oF/𝑣iF = 1 + 𝑅2/𝑅1 .    (2.111)
Again the characteristic transfer property is solely determined by the property
of the feedback circuit (voltage divider) as required for operation amplifiers (Sec-
tion 2.4.1).
𝑔𝑣F = 1/(1 + 1/𝑔𝑣A∗) .    (2.112)
Fig. 2.80. Voltage follower: the full output voltage is fed back to the inverting input (source 𝑣S with resistance 𝑅S, input voltage 𝑣iF, output voltage 𝑣oF, feedback current 𝑖iB).
This difference is due to the (very small) differential input voltage which is the output voltage divided by the voltage gain 𝑔𝑣A∗ of the amplifier under actual burden as indicated by the asterisk.
However, there is another factor to be considered. As mentioned before (Sec-
tion 2.3.5.1) the suppression of the common-mode signal in amplifiers with differential
input is not perfect. As 𝑣iCM ≈ 𝑣iF and the contribution of the common mode signal
to the output signal is suppressed by the common mode rejection ratio 𝐶𝑀𝑅𝑅, 𝑔𝑣F
becomes
𝑔𝑣F = (1/(1 + 1/𝑔𝑣A∗)) × (1 + 1/𝐶𝑀𝑅𝑅) .    (2.113)
Is an amplifier with a voltage gain of (less than) 1 an amplifier at all? The answer
is yes, if the power gain 𝑔𝑝 > 1. From (2.29) it is clear that with 𝑔𝑣 = 1, 𝑍iF > 𝑍L is
necessary to achieve a power gain 𝑔𝑝 > 1.
Comparing equation (2.112) with the general equation (2.69) using the (forward) loop gain results in a perfect agreement when equating 𝐴𝐵 with 𝑔𝑣A∗:
𝑔𝑣F = 1/(1 + 1/𝑔𝑣A∗) = 1/(1/𝑔𝑣A∗ + 𝐴𝐵/𝑔𝑣A∗)    (2.114)
Thus, the forward closed-loop gain is a voltage gain as required by the serial configuration at the input, 𝐴𝐵 = 𝑔𝑣A∗.
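For an assumed loaded open-loop gain and CMRR, a few lines of Python show how little the voltage gain of a voltage follower deviates from 1 according to (2.112) and (2.113):

# Sketch (assumed values): voltage follower gain with finite open-loop gain and finite CMRR.
g_vA = 2e5          # assumed loaded open-loop voltage gain
CMRR = 1e5          # assumed common mode rejection ratio

g_vF_finite_gain = 1 / (1 + 1 / g_vA)                 # eq. (2.112)
g_vF_with_cmrr = g_vF_finite_gain * (1 + 1 / CMRR)    # eq. (2.113)
print(g_vF_finite_gain, g_vF_with_cmrr)   # both deviate from 1 only by a few 1e-6 to 1e-5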
Example 2.13 (Importance of reverse loop gain). The calculation of the output impe-
dance of a voltage follower (Figure 2.80) requires the knowledge of the reverse loop
gain. Because of the parallel configuration at the output the reverse-loop gain is a cur-
rent gain, 𝐵𝐴 = 𝑖oA /𝑖iB . The input current 𝑖iB into the feedback network flows through
the input impedance 𝑍iA of the amplifier and the impedance 𝑅S of the source to ground
𝑍iB = 𝑍iA + 𝑅S . (2.115)
The reverse closed-loop gain 𝐵𝐴 equals
𝐵𝐴 = 𝑖oF/𝑖iB = 𝑖oA/(−𝑖iA) = −𝑔𝑖A∗ .    (2.116)
Thus, the output admittance 𝑌oF is given as
𝑌oF = 𝑌iB × (1 − 𝐵𝐴) = 𝑌iB × (1 + 𝑔𝑖A∗) = (1/(𝑍iA + 𝑅S)) × (1 + 𝑔𝑖A∗) .    (2.117)
In a simplistic view, disregarding the reverse loop gain, one would expect
𝑌oF = 𝑌oA × (1 + 𝑔𝑣A∗)    (2.118)
which is wrong because
𝑌oA ≠ 1/(𝑍iA + 𝑅S) , and    (2.119)
the forward loop gain 𝑔𝑣A∗ differs from the reverse loop gain 𝑔𝑖A∗, e.g., Table 2.8.
Fig. 2.82. Voltage stabilization by means of a Zener diode in series to a resistor 𝑅 providing shunt regulation.
Power considerations might require that a class B or B’ amplifier be used, i.e. the
operating point is at the edge of the dynamic range, either allowing the current to
increase, i.e. to deliver current as done by series regulators, or to decrease, i.e. to sink
current as done by shunt regulators. A simple demonstration of shunt regulation is
given in Figure 2.82. There a Zener diode that maintains its voltage will sink current
fed into the output.
If you need both delivering current and sinking current, the output stage of the
amplifier should be a push-pull device with complementary semiconductor power
components, with class B operating points. The N-type would be used for the series
regulator, the P-type for the shunt regulator.
Commercial power supplies, if at all linear, are very sophisticated having several
additional features: Possibly a preregulator that minimizes the power dissipated in the
output stage of the amplifier by maintaining a low operating voltage of that stage, an
output characteristic with constant voltage (cv) and constant current (cc; making the
device safe against short-circuits), and provisions against thermal, current, and power
overload. The cc mode not only allows limiting the output current to a preselected value but also makes it possible to use voltage power supplies in parallel, in contradiction to the behavior of ideal voltage sources. In that case the lowest voltage of the supplies
will be provided in the cv mode with the other supplies in the cc mode supplementing
additional current needs.
A linear regulated power supply excels over its competitors with regard to regulating properties and fast response to changes of line or/and load. Its recovery time after such changes is smaller than in supplies using other techniques. Its circuit simplicity provides a very effective solution coupled with high reliability, sufficient power for most applications, stable regulation, and little noise.
Fig. 2.83a. Basic precision half-wave rectifier.
Fig. 2.83b. Improved precision half-wave rectifier.
Feedback effectively removes the forward voltage of diodes making it possible that
even small (positive, in this circuit) signals can pass the diode.
Problem
2.74. The precision rectifier of Figure 2.83a will have zero output voltage 𝑣o for nega-
tive input voltage 𝑣i . Explain why it is possible that under these conditions the differ-
ential input voltage is not necessarily small!
Fig. 2.84. Difference operation amplifier.
However, there are two resistors which are not present in Figure 2.75. They are in
parallel to each other and in series to the input impedance. Because of the required
high input impedance of the operational amplifier these resistors do not really affect
the small-signal operation of the inverting amplifier.
Short-circuiting voltage source 1 turns the circuit into a cascade of a voltage divider and a noninverting operation amplifier. The voltage 𝑣S2 is first divided to the fraction 𝑅2/(𝑅S2 + 𝑅2) which is then amplified by the noninverting operation amplifier with
(𝑅F + 𝑅S1 )/𝑅S1 . Under the condition that
𝑅F/𝑅S1 = 𝑅2/𝑅S2    (2.121)
the combined output voltage 𝑣oF is given by
𝑣oF = (𝑣S2 − 𝑣S1) × 𝑅F/𝑅S1 ,    (2.122)
it is proportional to the difference of the two source voltages. For this reason this circuit
is called difference amplifier (or ambiguously differential amplifier).
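A short Python sketch verifies (2.122) for assumed resistor and source values fulfilling the condition (2.121):

# Sketch (assumed values): output of the difference operation amplifier of Fig. 2.84.
R_S1, R_F = 10e3, 100e3
R_S2, R_2 = 10e3, 100e3          # chosen so that R_F/R_S1 = R_2/R_S2, eq. (2.121)
v_S1, v_S2 = 0.50, 0.52          # assumed source voltages (V)

# inverting path (source 2 short-circuited) plus noninverting path (source 1 short-circuited):
v_o_inv = -R_F / R_S1 * v_S1
v_o_noninv = v_S2 * R_2 / (R_S2 + R_2) * (R_F + R_S1) / R_S1
v_oF = v_o_inv + v_o_noninv
print(v_oF, (v_S2 - v_S1) * R_F / R_S1)    # both 0.2 V, in agreement with eq. (2.122)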
There are two main contributors that influence the quality of the resulting output
voltage (see Section 2.3.5.1)
– common mode contributions, and
– asymmetry (offset) contributions.
The common mode input signal 𝑣iCM = (𝑣i1 + 𝑣i2 )/2 adds to the output signal an
(amplitude dependent) component 𝑣oCM = 𝑣iCM × 𝑔𝑣∗ /𝐶𝑀𝑅𝑅, and because of the
unavoidable asymmetry of the two input stages an offset voltage (and current) at the
input is needed to balance the output to zero if the input voltages are zero. The latter
requires that the impedances of the circuit as seen from the two inputs are equal.
Figure 2.85 shows a network for subtracting signals that avoids common mode
problems. Subtracting is reached by summing signal 1 with the inverted signal 2. Both
amplifiers have virtual ground (Section 2.4.3.1) so that common mode problems do
not occur. This arrangement does not only require an additional amplifier but also the
asymmetry of the two input channels limits its application.
Fig. 2.85. Network for subtracting two signals (𝑣i1, 𝑣i2) built from two inverting operation amplifiers with equal resistors 𝑅.
All equations containing the (closed-) loop gain are valid for positive feedback as
well, considering that now the loop gain is positive. Therefore, the return difference is
1 − |𝐴𝐵| or 1 − |𝐵𝐴|, respectively. When the loop gain is positive but less than 1 the
return difference becomes < 1 and the properties of a two-port are affected in a way
just opposite to negative feedback, i.e. all advantages of negative feedback for ampli-
fiers are reversed to disadvantages. Only the increase of the transfer parameter can be
beneficial. In special cases it pays to increase the gain by a local positive feedback to
increase the overall loop gain of a negative feedback loop that extends over several
stages. The main application of positive feedback is in oscillator design with which
we will deal in Chapter 4. In tailoring dynamic negative impedances, stable positive
feedback is also invaluable.
Problem
2.75. Under which condition is an amplifier with positive feedback stable?
Fig. 2.86. Negative impedance converter based on negative and positive feedback with parallel configurations at the output of the amplifier.
and in analogy
𝑍oNIC = 𝑣oNIC/𝑖oNIC = 𝑣iF/𝑖oNIC = −𝑅S × 𝑅2/𝑅1 .    (2.124)
Alternatively, the output impedance can be obtained by calculating the dynamic im-
pedance of 𝑅2 (Section 2.4.3.2)
𝑍oNIC = 𝑅2/(1 − 𝑔𝑣F) = 𝑅2/(1 − (𝑅1 + 𝑅S)/𝑅S) = −𝑅S × 𝑅2/𝑅1 .    (2.125)
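With assumed resistor values, a short Python sketch evaluates (2.125) and confirms that the converter indeed presents a negative output impedance:

# Sketch (assumed values): output impedance of the negative impedance converter of Fig. 2.86.
R_1 = 10e3
R_2 = 10e3
R_S = 1e3          # assumed source resistor at the noninverting input

g_vF = (R_1 + R_S) / R_S                 # gain seen by R_2 from the output side
Z_oNIC = R_2 / (1 - g_vF)                # dynamic impedance of R_2, eq. (2.125)
print(Z_oNIC, -R_S * R_2 / R_1)          # both -1000 Ohm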
To have the operation amplifier in a stable condition the signal provided by positive feedback to
the noninverting input must at no moment be stronger than that of the negative feedback acting
on the inverting input.
The fractions of 𝑣oF that are fed back can be obtained by voltage division of 𝑣oF (= 𝑣oA )
𝑣iF = 𝑣oA × 𝑅S/(𝑅1 + 𝑅S) ≥ 𝑣oNIC = 𝑣oA × 𝑅L/(𝑅2 + 𝑅L) .    (2.126)
With 𝑅1 = 𝑅2 the stability condition degenerates to 𝑅S ≥ 𝑅L which means that the
system is stable with 1/𝑅S = 0 (open-circuit) or with 𝑅L = 0 (short-circuit). In other
words, the input i-v-characteristic has an S-shape, the output i-v-characteristic an N-
shape.
Problems
2.76. Show that the load line of a short-circuit has only one intersection with an N-
shaped i-v-characteristic.
2.77. Show that the load line of an open-circuit has only one intersection with an S-
shaped i-v-characteristic.
As discussed before, the main purpose of electronics is to transfer (signal) power into
a load. Usually, the circuit transferring the power is symbolized by a two-port with two
input variables and two output variables. Electric power is best determined through
measurements of voltage or/and current. Therefore, there are four options to describe
the transfer property of a two-port (Section 2.2.2). The usual favorite is the ratio be-
tween output voltage 𝑣o and input voltage 𝑣i , called voltage gain 𝑔𝑣 . The following
discussion will be based on the voltage gain as transfer property. The other three be-
have in a dual way.
Voltage gain depends on the operating point. Therefore, it should be presented
by a characteristic rather than a number. Besides each device has its own individ-
ual characteristic. Therefore, it is necessary to work with typical characteristics. As a
byproduct this requires that only such circuits should be designed and built that are
not sensitive to the actual characteristic.
Fig. 2.87. Hierarchy of voltage gains; among them the actual voltage gain 𝑔𝑣∗ which accounts for the loading of the circuit.
Modern circuit design is aimed at becoming independent of the individual characteristics of the
active electronic components.
Thus, the first step is to find the typical voltage transfer characteristic and to linearize
it so that it can be expressed by a single parameter, the voltage gain 𝑔𝑣 as found in data
sheets. As this parameter should be usable for all loads, it is given under open-circuit
condition and is called open-loop gain. Thus, it is identical with the linear two-port
parameter 𝑔f (Section 2.2.2.1).
However, in an actual circuit the voltage gain with the actual burden is of partic-
ular interest, symbolized in this book by 𝑔𝑣∗ .
Applying series–parallel feedback introduces a new type of voltage gain 𝑣oF /𝑣iF ,
the voltage gain of the amplifier with feedback which, in the case of negative feedback,
is primarily determined by the feedback circuit and not by the transfer property 𝑔𝑣
of the active two-port. Furthermore, the voltage gain 𝑔𝑣∗ is part of the forward loop
gain 𝐴𝐵 of this feedback circuit. This hierarchy is sketched in Figure 2.87.
In addition there are two “odd” voltage gains to consider.
– It is generally accepted (bad) practice to turn the inverting operation amplifier, with transimpedance as its characteristic transfer property, into a voltage amplifier by including the source impedance 𝑅S, arriving at a source-voltage gain which is defined as 𝑔𝑣S = 𝑣oF/𝑣S.
– Then there is the noise gain. Not quite unexpectedly, the feedback circuitry may
act differently on amplifier noise which originates from inside the circuit than on
a signal from outside. An example is given in Figure 2.88.
Fig. 2.88. Negative parallel–parallel feedback with internal noise.
For a source signal this is a transimpedance amplifier with a voltage gain from source
to output of −𝑅F /𝑅S . Noise in the amplifier is usually expressed by a noise voltage
source at the input of said amplifier. For a noise signal the signal source 𝑣S is equiv-
alent to a short-circuit (due to the superposition theorem) so that for the noise signal
𝑣in a serial-parallel feedback configuration is in force with a noise voltage gain 𝑔𝑣n of
𝑔𝑣n = 1 + 𝑅F/𝑅S .    (2.127)
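A two-line Python sketch contrasts the two gains for assumed resistor values:

# Sketch (assumed values): signal gain vs. noise gain of the circuit of Fig. 2.88.
R_S, R_F = 1e3, 100e3
g_v_signal = -R_F / R_S        # gain for the source signal (inverting configuration)
g_v_noise = 1 + R_F / R_S      # gain for noise referred to the amplifier input, eq. (2.127)
print(g_v_signal, g_v_noise)   # -100 and 101; note that for small R_F/R_S the noise gain
# stays at least 1 while the signal gain goes toward zero.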
Problems
2.78. Why is it not unexpected that a feedback circuit may behave differently to ex-
ternal and internal signals?
2.79. A change in the biasing (of the operating point) acts very similar to noise gener-
ated in the active element. What kind of feedback configuration is responsible for the
counter reaction of the circuit on such changes?
A current mode operational amplifier is basically an active device with low input im-
pedance, high current gain, and high output impedance. None of the three-terminal
components provide these characteristics by themselves. As a minimum, a two-stage
structure is required, consisting of a common base input stage providing a low in-
put impedance and a common-emitter output stage providing high current gain and a
high output impedance. Thus, a transimpedance input stage is followed by a transad-
mittance output stage. The current gain is the product of the transimpedance and the
transadmittance.
If a differential output is required, a differential long-tailed pair is an obvious
choice for the output stage. One input to the long-tailed pair is used for the signal, the other input is connected to a constant bias voltage source.
Problem
2.80. Name one reason why voltage amplifiers are preferred over current amplifiers.
Fig. 2.89. Active current source based on an operational amplifier. The load current 𝑖L is a high impedance output current.
𝑣iA = 𝑖L × 𝑅1 ,
yielding
𝑍oL = 𝑣L /𝑖L = 𝑍oA + 𝑅1 × (1 + 𝑔𝑣A ) . (2.128)
Fig. 2.90. An operational amplifier aided current source.
Thus, the impedance of this current source is effectively the dynamically increased im-
pedance of 𝑅1 (Section 2.4.3.2). The disadvantage of this circuit is that the current is not
delivered toward ground but against a floating voltage. Note that the load resistance
was omitted in this calculation; just the current 𝑖L through 𝑅1 was used. Consequently,
the value of the load resistance does not matter, i.e. the current is independent of the
value of the load resistor, just as required of a current source. With a high voltage gain
𝑔𝑣A , 𝑍oL could well be many megohms.
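To get a feeling for the magnitudes involved, the following short Python sketch evaluates (2.128); all numerical values in it are illustrative assumptions, not values taken from the text.

```python
# Illustrative evaluation of eq. (2.128): Z_oL = Z_oA + R_1 * (1 + g_vA).
# All numerical values below are assumptions chosen only to show the magnitudes.

def output_impedance(z_oa_ohm: float, r1_ohm: float, g_va: float) -> float:
    """Output impedance of the op-amp based current source, eq. (2.128)."""
    return z_oa_ohm + r1_ohm * (1.0 + g_va)

if __name__ == "__main__":
    z_oa = 100.0   # assumed open-loop output impedance of the amplifier (ohm)
    r1 = 1e3       # assumed resistor R1 (ohm)
    for g_va in (1e3, 1e5):  # assumed voltage gains g_vA
        z_ol = output_impedance(z_oa, r1, g_va)
        print(f"g_vA = {g_va:.0e}:  Z_oL = {z_ol / 1e6:.2f} MOhm")
```

Already a gain of 10³ pushes the output impedance beyond a megohm in this sketch, in line with the statement above.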
Problem
2.81. What is the disadvantage of the circuit in Figure 2.89?
Changes of 𝑣iA and 𝛽 are strongly suppressed so that a stable output current is guar-
anteed.
The output impedance is the output impedance of the common-emitter circuit in
series to the (dynamic) impedance of 𝑅E . The effective source impedance (between the
base and the emitter of the transistor) is obtained by
𝑍S = 𝑍oA + 𝑅E × (1 + 𝑔𝑣A ), (2.130)
i.e. it is very high. Therefore, the equation for the output impedance of common-
emitter circuits with very high source impedances applies
𝑍oe = 𝑟c /(𝛽 + 1) . (2.131)
Together with the impedance of 𝑅E the total output impedance is obtained as
𝑍o ≈ 𝑟c /(𝛽 + 1) + 𝑅E × (𝛽 + 1)/𝛽 . (2.132)
The dynamic voltage range of the mirror is called the compliance range.
The basic circuit of a current mirror is shown in Figure 2.91. A voltage applied to the
base-emitter junction of Q1 is at the same time the input voltage of Q2 with its collector
current as output quantity. Q2 acts as an exponential voltage-to-current converter. The
circuit that includes Q1 can be viewed as a negative feedback circuit. This interpretation looks rather far-fetched, so it deserves some scrutiny. In Figure 2.92 the first
stage of the current mirror is put into the two two-ports of a parallel–parallel feedback
(Section 2.5.1). The situation is extremely unusual as the active element is in the feed-
back two-port and the wire connecting collector with base rests in the two-port A (see
Figure 2.92). As primarily the (closed) loop gain is essential for the feedback action it
does not matter in which two-port the active element is situated so that our findings
on feedback (Section 2.4.1) are valid without question. At high enough (closed-) loop
gain the transfer of the feedback circuit is (entirely) the inverse of that of the feedback
network. Therefore, the first stage has the inverse transfer property of a transistor, i.e., it acts as a logarithmic current-to-voltage converter. Cascading a logarithmic current-to-voltage converter (Q1 ) with an exponential voltage-to-current converter (Q2 ) results in a current amplifier with gain 1 if the transfer functions of Q1 and Q2 are matched. Strictly speaking, it does not matter which shape the transfer function has: if the two elements have equal voltage-to-current transfer functions, the circuit will invert the transfer function of the first stage so that the product of both results in a current gain 𝑔𝑖 = 1. Considering the geometric asymmetry of the two stages one gets for equal current gains 𝛽 of the transistors
𝑔𝑖 = 𝑖c2 /(𝑖c1 + 2 × 𝑖b ) = 𝛽2 /(𝛽1 + 2) = 1/(1 + 2/𝛽) ≈ 1 . (2.133)
The compliance voltage, the lowest output voltage that results in correct mirror
behavior, is at 𝑉o = 𝑉BE because 𝑉CB2 ≥ 0 V keeps Q2 active. The output impedance
𝑍o is just the output impedance 𝑍oe of Q2 which is a common-emitter circuit.
Problem
2.82. When the simple current mirror is explained by negative feedback action, the
active element is in the feedback two-port. Why do all the equations concerning feed-
back also apply for such cases?
Example 2.15 (Current mirror aided by an operational amplifier). Although the circuit in Figure 2.93 resembles that of a simple current mirror insofar as the first stage is rather similar, its behavior is distinctly different. Again, there is a two-stage ar-
rangement with the first stage performing a current-to-voltage conversion and a sec-
ond stage a voltage-to-current conversion. However, in this case there is no need to
match the characteristics of the active elements because the conversions are done by
the resistors 𝑅E1 and 𝑅E2 , respectively. Let us assume that both resistors have the same
value 𝑅E . Assuming high input resistance of the operational amplifier (𝑍iA ≫ 𝑅E )
then all of the input current 𝑖i flows through 𝑅E1 resulting in an input voltage 𝑣+ at
Fig. 2.93. Current mirror with operational amplifier feedback to increase output resistance according to Example 2.14.
the operational amplifier of 𝑣+ = 𝑖i × 𝑅E1 . The input voltage 𝑣− at the inverting input is 𝑣− = 𝑣+ − 𝑣iA . From the output voltage 𝑣oA of the amplifier one gets its input voltage 𝑣iA = 𝑣oA /𝑔𝑣A∗ . As the emitter follower Q2 has a voltage gain of practically 1, its (small-signal) output voltage equals 𝑣oA which at the same time is 𝑣− . Therefore, one gets

𝑣− = 𝑣+ − 𝑣iA = 𝑖i × 𝑅E1 − 𝑣oA /𝑔𝑣A∗ = 𝑖i × 𝑅E1 − 𝑣− /𝑔𝑣A∗ , (2.134)
and
𝑣− × (1 + 1/𝑔𝑣A∗ ) = 𝑖i × 𝑅E1 = (𝑖o + 𝑖B2 ) × 𝑅E1 × (1 + 1/𝑔𝑣A∗ )
= 𝑖o × (1 + 1/𝛽2 ) × 𝑅E1 × (1 + 1/𝑔𝑣A∗ ) (2.135)
yielding
𝑔𝑖 = 𝑖o /𝑖i = 1/[(1 + 1/𝛽2 ) × (1 + 1/𝑔𝑣A∗ )] ≈ 1 . (2.136)
This answer is closer to one than that of the circuit in Figure 2.91. Actually these two
circuits have little in common except for the input stage and the split of the current
gain into a current-to-voltage conversion and a voltage-to-current conversion.
The second stage with the operational amplifier is just the operational amplifier
aided current source of the previous section. There, we found an increase of the out-
put impedance 𝑍o2 at the collector of Q2 . This increased output impedance is another
benefit of this circuit.
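A short numerical comparison of (2.133) and (2.136) illustrates how much closer the op-amp aided mirror comes to a current gain of one; the values of 𝛽 and 𝑔𝑣A∗ in the following Python sketch are assumptions chosen only for illustration.

```python
# Sketch comparing the current gains of the simple mirror, eq. (2.133),
# and of the op-amp aided mirror, eq. (2.136). Numerical values are assumptions.

def gain_simple_mirror(beta: float) -> float:
    """g_i = 1 / (1 + 2/beta), eq. (2.133)."""
    return 1.0 / (1.0 + 2.0 / beta)

def gain_opamp_mirror(beta2: float, g_va: float) -> float:
    """g_i = 1 / ((1 + 1/beta2) * (1 + 1/g_vA*)), eq. (2.136)."""
    return 1.0 / ((1.0 + 1.0 / beta2) * (1.0 + 1.0 / g_va))

if __name__ == "__main__":
    beta = 100.0     # assumed transistor current gain
    g_va = 1.0e5     # assumed loaded voltage gain of the operational amplifier
    print(f"simple mirror : g_i = {gain_simple_mirror(beta):.6f}")
    print(f"op-amp mirror : g_i = {gain_opamp_mirror(beta, g_va):.6f}")
```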
Problem
2.83. Name the two benefits of the operational amplifier aided current mirror of Fig-
ure 2.93.
angle (measured in radians) relative to some reference angle. Presenting the frequency 𝑓 = 1/𝑇 (measured in Hz) by the angular frequency 𝜔 = 2𝜋 × 𝑓 (measured in s−1 ) and the phase shift 𝜑 in radians is required because the argument of the sine function must be proportional to 2𝜋 (i.e., for practical purposes, a fraction of it).
Other common steady-state signals are the square wave, the triangular wave, and
the saw-tooth signal.
Problems
3.1. Give the mathematical presentation for a bipolar saw-tooth signal with a period 𝑇
oscillating between +1 V and −1 V.
3.2. Give the mathematical presentation for a bipolar square-wave signal with a pe-
riod 𝑇 oscillating between +1 V and −1 V.
3.3. Give the mathematical presentation for a bipolar triangular wave signal with a
period 𝑇 oscillating between +1 V and −1 V.
3.4. Explain why one can charge a capacitor with a DC voltage supply even if the
admittance of a capacitor equals zero at 𝜔 = 0.
3.5. Name two signal properties that a constant voltage and a constant frequency have in common.
Reversing Kirchhoff’s laws we arrive at the fact that in a linear network any current
through a node (or voltage between two points) can be composed of partial currents
(voltages).
In Figure 3.1 one simple example is given where the superposition of a unipolar
voltage square-wave signal with a constant voltage level is demonstrated.
Fig. 3.1a. Superposition of a constant voltage with a unipolar square-wave voltage signal. Fig. 3.1b. Combined output of the two voltage sources.
Fig. 3.2a. Decomposition of the signal from (3.1) into harmonic functions (here sine-functions). The front panel shows the signal in the time domain. The individual components are shown behind this panel.
This figure may be read in either direction: Adding the output of two voltage sources yields the
combined output as shown. The combined output voltage signal can be decomposed
into the signals of two appropriate voltage sources. In this case, the decomposition is
not unique because the square wave may be assumed to be bipolar, or unipolar in ei-
ther direction, each requiring a different DC voltage to produce the identical combined
signal.
Decomposition into a DC voltage and a bipolar square-wave signal is most con-
venient for further use. It resembles the decomposition of the electrical variables of
the momentary operating point into a quiescent value and a small-signal value (Sec-
tions 2.1.3.5 and 2.3.3).
More to the point is the superposition of steady-state sinusoidal signals, e.g., the
following sum
𝑣(𝑡) = 1.27 sin(𝜔0 𝑡) + 0.42 sin(3𝜔0 𝑡) + 0.25 sin(5𝜔0 𝑡) + 0.18 sin(7𝜔0 𝑡)[V] . (3.1)
As the frequencies are integer multiples of the basic frequency 𝜔0 one expects a steady-
state signal that is repetitive with a period of 𝑇 = 2𝜋/𝜔0 .
Figure 3.2d shows the four amplitudes of the frequency components as a function of the angular frequency. This figure is the presentation of the combined signal
in the frequency domain. Figure 3.2b shows the same signal in the time domain, i.e.,
the result of the addition of the above frequency components.
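Numerically, the right-hand side of (3.1) can be generated and decomposed again in a few lines. The four amplitudes are, to the precision given, the leading odd-harmonic amplitudes 4/(𝑘𝜋) of a bipolar square wave of 1 V amplitude. The Python sketch below (the sampling grid is an assumption, not taken from the text) synthesizes the sum and reads the amplitudes back with an FFT:

```python
# Synthesizing the sum (3.1) over one period and recovering the four amplitudes
# from its spectrum with an FFT. Grid parameters (T, n) are assumptions.
import numpy as np

T = 1.0                        # assumed period, so w0 = 2*pi/T
w0 = 2.0 * np.pi / T
n = 1024                       # samples per period (assumption)
t = np.arange(n) * T / n

amplitudes = {1: 1.27, 3: 0.42, 5: 0.25, 7: 0.18}      # volts, from (3.1)
v = sum(a * np.sin(k * w0 * t) for k, a in amplitudes.items())

spectrum = 2.0 * np.abs(np.fft.rfft(v)) / n            # single-sided amplitude spectrum
for k in sorted(amplitudes):
    print(f"component at {k}*w0: {spectrum[k]:.2f} V")  # 1.27, 0.42, 0.25, 0.18
```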
Reversing the above procedure, namely decomposing periodic steady-state signals, i.e., periodic signals with properties that are unchanging in time, into their harmonic components is a powerful tool in electronics. The Fourier transform (Section 3.1.1) decomposes periodic time-signals into their conjugate frequency components. Spectrum analyzers (Section 6.8.3) that use a Fourier transform algorithm perform such a decomposition of electrical signals.
Problem
3.6. Which physical quantity is conjugate to time?
In the following sections the relevant equations needed for Fourier analysis are collected. These
sections may be skipped without endangering the understanding of any important topic of this
book.
There are various ways of dealing with signal decomposition. The decomposition of
signals into harmonic functions (sine- and cosine-functions) is called Fourier analy-
sis. In the following we will explain the basics of Fourier series representation of peri-
odic signals and will then give an introduction into the continuous Fourier transform
which is a generalization of the Fourier series method. Finally, we will give a short overview of how the Fourier transform method assists in solving systems of differ-
ential equations.
There exist several different notations for the Fourier series and for the Fourier
transform as well. We will present the three standard forms of Fourier series represen-
tation (sine- and cosine-sums, cosine-sums with phase shifts and exponential form)
using angular frequencies (𝜔 = 2𝜋 × 𝑓). Normalization factors in the series coeffi-
cients will, therefore, be multiples of the fraction 1/𝑇. If regular frequencies (𝑓) were
used instead of angular frequencies, the normalization factors would be multiples of
1/2𝜋 instead of 1/𝑇.
Problem
3.7. Fourier analysis decomposes a periodic time signal into a finite number of com-
ponents. What are these components like?
with
𝜔0 = 2𝜋 × 𝑓0 = 2𝜋/𝑇 . (3.3)
Fig. 3.3a. Graph of the function cos(1𝜔0 𝑡) cos(0𝜔0 𝑡), i.e., cos(𝜔0 𝑡). The highlighted area between the time axis and the function graph equals the integral from 𝑡 = −𝑇/2 to 𝑡 + 𝑇 = 𝑇/2. Observe that the negative and the positive areas are equal. Thus the integral is zero.
Fig. 3.3c. Graph of the function cos(3𝜔0 𝑡) cos(3𝜔0 𝑡). The highlighted area between the 𝑡-axis and the function’s graph equals 𝑇/2.
= ∑_{𝑘=1}^{∞} {𝑎𝑘 ∫_{𝑡}^{𝑡+𝑇} cos(𝑘𝜔0 𝑡) sin(𝑙𝜔0 𝑡) d𝑡 + 𝑏𝑘 ∫_{𝑡}^{𝑡+𝑇} sin(𝑘𝜔0 𝑡) sin(𝑙𝜔0 𝑡) d𝑡} = 𝑏𝑙 × 𝑇/2 . (3.7)
Multiplying (3.2) with cos(𝑚𝜔0 𝑡) and subsequent integration yields, correspondingly, 𝑎𝑚 × 𝑇/2.
The limit 𝑡 can be chosen arbitrarily and should be chosen so that the integration of the above
terms is easiest. For signals with the property 𝑠(𝑡) = 𝑠(−𝑡) all coefficients 𝑏𝑘 are zero
and for signals with 𝑠(𝑡) = −𝑠(−𝑡) all 𝑎𝑘 are zero.
There are two components for each frequency 𝑘𝜔0 : a sine-component and a
cosine-component. All linear combinations of these components 𝑎𝑘 cos(𝑘𝜔0 𝑡) +
𝑏𝑘 sin(𝑘𝜔0 𝑡) can be rewritten by means of the trigonometric angle-sum identity
so that
𝑎𝑘 cos(𝑘𝜔0 𝑡) + 𝑏𝑘 sin(𝑘𝜔0 𝑡) = 𝐴 𝑘 cos(𝑘𝜔0 𝑡 − 𝜑𝑘 ) , (3.13)
𝑎𝑘 = 𝐴 𝑘 cos(𝜑𝑘 ) (3.14)
𝑏𝑘 = 𝐴 𝑘 sin(𝜑𝑘 ) (3.15)
Thus, the Fourier series representation of the signal 𝑠(𝑡) can be written
𝑠(𝑡) = 𝐴0 /2 + ∑_{𝑘=1}^{∞} 𝐴𝑘 cos(𝑘𝜔0 𝑡 − 𝜑𝑘 ) . (3.18)
The numbers 𝐴 𝑘 form the (discrete) frequency spectrum and 𝜑𝑘 the phase shift spectrum of the
signal 𝑠(𝑡).
𝑠(𝑡) = 𝑎0 /2 + ∑_{𝑘=1}^{∞} {𝑎𝑘 cos(𝑘𝜔0 𝑡) + 𝑏𝑘 sin(𝑘𝜔0 𝑡)}
= 𝑎0 /2 + ∑_{𝑘=1}^{∞} {𝑎𝑘 × (e^{j𝑘𝜔0 𝑡} + e^{−j𝑘𝜔0 𝑡})/2 − 𝑏𝑘 j × (e^{j𝑘𝜔0 𝑡} − e^{−j𝑘𝜔0 𝑡})/2}
= 𝑎0 /2 + ∑_{𝑘=1}^{∞} {(𝑎𝑘 − 𝑏𝑘 j)/2 × e^{j𝑘𝜔0 𝑡} + (𝑎𝑘 + 𝑏𝑘 j)/2 × e^{−j𝑘𝜔0 𝑡}}
= 𝑎0 /2 + ∑_{𝑘=1}^{∞} (𝑎𝑘 − 𝑏𝑘 j)/2 × e^{j𝑘𝜔0 𝑡} + ∑_{𝑘=1}^{∞} (𝑎𝑘 + 𝑏𝑘 j)/2 × e^{−j𝑘𝜔0 𝑡}
= 𝑐0 + ∑_{𝑘=1}^{∞} 𝑐𝑘 e^{j𝑘𝜔0 𝑡} + ∑_{𝑘=−∞}^{−1} 𝑐𝑘 e^{j𝑘𝜔0 𝑡}
= ∑_{𝑘=−∞}^{∞} 𝑐𝑘 e^{j𝑘𝜔0 𝑡} (3.22)
The sequence of (complex) coefficients 𝑐𝑘 is called the (discrete) complex Fourier spec-
trum. These coefficients can be expressed by their amplitude and phase terms
𝑐𝑘 = 𝐴𝑘 × e^{−j𝜑𝑘 } , (3.25)
with
𝐴𝑘 = |𝑐𝑘 | = √(𝑎𝑘 ² + 𝑏𝑘 ²) (3.26)
𝜑𝑘 = arctan(𝑏𝑘 /𝑎𝑘 ) for 𝑎𝑘 > 0 ;
𝜑𝑘 = arctan(𝑏𝑘 /𝑎𝑘 ) + 𝜋 for 𝑎𝑘 < 0 ;
𝜑𝑘 = 𝜋/2 for 𝑎𝑘 = 0, 𝑏𝑘 > 0 ;
𝜑𝑘 = −𝜋/2 for 𝑎𝑘 = 0, 𝑏𝑘 < 0 , (3.27)
so that the exponential Fourier series becomes
𝑠(𝑡) = ∑_{𝑘=−∞}^{∞} 𝐴𝑘 e^{j(𝑘𝜔0 𝑡−𝜑𝑘 )} . (3.28)
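The conversions between the coefficient sets can be condensed into a few lines of Python. The sketch below (with arbitrary example coefficients, and using atan2, which reproduces the case distinction of (3.27) up to multiples of 2𝜋) mirrors (3.14)/(3.15) and (3.25)–(3.27):

```python
# Sketch of the coefficient conversions: (a_k, b_k) -> (A_k, phi_k) -> c_k.
# The example coefficients are arbitrary assumptions.
import cmath
import math

def to_amplitude_phase(a_k: float, b_k: float):
    """A_k = sqrt(a_k**2 + b_k**2); phi_k as in (3.27) (atan2 agrees modulo 2*pi)."""
    return math.hypot(a_k, b_k), math.atan2(b_k, a_k)

def to_complex_coefficient(amp: float, phi: float) -> complex:
    """c_k = A_k * exp(-j*phi_k), eq. (3.25); with this normalization c_k = a_k - j*b_k."""
    return amp * cmath.exp(-1j * phi)

if __name__ == "__main__":
    a_k, b_k = 0.6, 0.8
    amp, phi = to_amplitude_phase(a_k, b_k)
    c_k = to_complex_coefficient(amp, phi)
    print(f"A_k = {amp:.3f}, phi_k = {math.degrees(phi):.1f} deg")
    print(f"c_k = {c_k.real:.3f} {c_k.imag:+.3f}j")
```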
This way the Fourier method can be generalized for nonrepetitive signals leading to a continuous frequency spectrum instead of a sequence of amplitudes 𝑐𝑘 . The spectrum is denoted by a function 𝑠̃(𝜔) which is called the Fourier transform of 𝑠(𝑡):

𝑠̃(𝜔) = ∫_{−∞}^{∞} 𝑠(𝑡) e^{−j𝜔𝑡} d𝑡 . (3.30)
When in applications the frequency 𝑓 is used instead of the angular frequency 𝜔, the
transforms become
𝑠̃(𝑓) = ∫_{−∞}^{∞} 𝑠(𝑡) e^{−2𝜋j𝑓𝑡} d𝑡 (3.32)

𝑠(𝑡) = ∫_{−∞}^{∞} 𝑠̃(𝑓) e^{2𝜋j𝑓𝑡} d𝑓 . (3.33)
F[𝑠] = 𝑠̃ (3.34)
𝑠 = F⁻¹ [𝑠̃] . (3.35)
Linearity
Let 𝑎 and 𝑏 be two (complex) numbers and 𝑠1 (𝑡) and 𝑠2 (𝑡) two signals, then

F[𝑎 × 𝑠1 (𝑡) + 𝑏 × 𝑠2 (𝑡)] = 𝑎 × 𝑠1̃ (𝜔) + 𝑏 × 𝑠2̃ (𝜔) . (3.36)
Time shift
Let 𝑠(𝑡) be a signal and 𝛥𝑡 a time difference, then

F[𝑠(𝑡 − 𝛥𝑡)] = e^{−j𝜔𝛥𝑡} × 𝑠̃(𝜔) .
Scaling
With the signal 𝑠(𝑡) and any real number 𝑎, then

F[𝑠(𝑎𝑡)] = (1/|𝑎|) × 𝑠̃(𝜔/𝑎) . (3.39)
Differentiation
Let 𝑠(𝑡) be a signal, then

F[d𝑠(𝑡)/d𝑡] = j𝜔 × F[𝑠(𝑡)] = j𝜔 × 𝑠̃(𝜔) , and (3.40)

F[𝑡 × 𝑠(𝑡)] = j × dF[𝑠(𝑡)]/d𝜔 = j × d𝑠̃(𝜔)/d𝜔 . (3.41)
Convolution
Let 𝑠1 (𝑡) and 𝑠2 (𝑡) be two signals and (𝑠1 ∗ 𝑠2 )(𝑡) their convolution, then we get

F[𝑠1 (𝑡) × 𝑠2 (𝑡)] = (1/2𝜋) × (𝑠1̃ ∗ 𝑠2̃ )(𝜔) (3.43)

F[(𝑠1 ∗ 𝑠2 )(𝑡)] = 𝑠1̃ (𝜔) × 𝑠2̃ (𝜔) . (3.44)
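The convolution property can be checked numerically with the discrete Fourier transform. The numpy sketch below uses the circular (discrete) analogue of (3.44) on arbitrary test signals, so it illustrates the idea rather than the continuous theorem itself:

```python
# Discrete (circular) analogue of the convolution property (3.44):
# the DFT of a circular convolution equals the product of the DFTs.
# The test signals are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 128
s1 = rng.standard_normal(n)
s2 = rng.standard_normal(n)

# circular convolution by its definition ...
conv_direct = np.array([sum(s1[m] * s2[(k - m) % n] for m in range(n))
                        for k in range(n)])
# ... and via the transform-domain product
conv_fft = np.fft.ifft(np.fft.fft(s1) * np.fft.fft(s2)).real

print("maximum deviation:", np.max(np.abs(conv_direct - conv_fft)))  # ~1e-13
```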
Fig. 3.4. Example circuit for finding an unknown signal 𝑣o (𝑡) by Fourier transform.
Given a (time dependent) signal 𝑣i (𝑡) as input voltage there are nine unknown quantities: the output voltage 𝑣o (𝑡) at the load resistance 𝑅L , the output current 𝑖o (𝑡), the voltage 𝑣𝐶 (𝑡) across the capacity 𝐶, the current flowing into the capacity 𝑖𝐶 (𝑡), the voltage 𝑣𝐿 (𝑡) across the inductivity 𝐿, the current flowing through the inductivity 𝑖𝐿 (𝑡), the voltage 𝑣𝑅 (𝑡) across the resistor 𝑅 and the current 𝑖𝑅 (𝑡) through it, as well as the input current 𝑖i (𝑡). Clearly, all quantities are time dependent since the input voltage is time dependent.
Thus, we need a set of nine independent equations involving these variables.
Three equations are trivially obtained from the quantities’ definitions (see Figure 3.4)
𝑖i (𝑡) = 𝑖𝐿 (𝑡)
𝑖𝐿 (𝑡) = 𝑖𝑅 (𝑡)
𝑣𝐶 (𝑡) = 𝑣o (𝑡) ,
and two equations can be found by applying Kirchhoff’s first (2.1) and second (2.2)
law.
Thus, we have to deal with two differential equations for 𝑣o (𝑡) and 𝑖i (𝑡):
𝑣i (𝑡) = 𝐿 × d𝑖i (𝑡)/d𝑡 + 𝑅 × 𝑖i (𝑡) + 𝑣o (𝑡)
𝑖i (𝑡) = 𝐶 × d𝑣o (𝑡)/d𝑡 + 𝑣o (𝑡)/𝑅L .
By applying the properties of the Fourier transform (3.36) and (3.40) the equations are
transformed to
𝑣̃i (𝜔) = [𝐿𝜔j + 𝑅] × [𝐶𝜔j × 𝑣̃o (𝜔) + 𝑣̃o (𝜔)/𝑅L ] + 𝑣̃o (𝜔)
= [1 + 𝑅/𝑅L − 𝐿𝐶𝜔² + (𝐿/𝑅L + 𝑅𝐶) × 𝜔j] × 𝑣̃o (𝜔)
and the Fourier transform 𝑣̃o (𝜔) of the previously unknown 𝑣o (𝑡) is

𝑣̃o (𝜔) = 1/[1 + 𝑅/𝑅L − 𝐿𝐶𝜔² + (𝐿/𝑅L + 𝑅𝐶) × 𝜔j] × 𝑣̃i (𝜔) . (3.45)
The standard procedure for finding an unknown 𝑣o (𝑡) firstly involves computation of the Fourier transform 𝑣̃i (𝜔) of the given 𝑣i (𝑡). Subsequently, the system of algebraic equations involving the transform 𝑣̃o (𝜔) has to be solved. Finally, the inverse Fourier transform thereof has to be computed.
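As a sketch of such a “glance at the transfer characteristics”, the bracketed denominator of (3.45) can be evaluated directly for a few angular frequencies; the element values in the following Python snippet are assumptions, not values from the text:

```python
# Evaluating the frequency response implied by (3.45),
#   v_o(w)/v_i(w) = 1 / (1 + R/R_L - L*C*w**2 + (L/R_L + R*C)*1j*w),
# for assumed element values.
import numpy as np

R, L, C, R_L = 10.0, 1e-3, 1e-6, 50.0   # assumed values: ohm, henry, farad, ohm

def transfer(w: float) -> complex:
    return 1.0 / (1.0 + R / R_L - L * C * w**2 + (L / R_L + R * C) * 1j * w)

for w in (1e2, 1e4, 1e5):               # a few angular frequencies in s^-1
    h = transfer(w)
    print(f"w = {w:8.0f} s^-1: |G| = {abs(h):.3f}, "
          f"phase = {np.degrees(np.angle(h)):7.2f} deg")
```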
However, there are some difficulties in dealing with this procedure. Although the
computation of the inverse transform is aided by the application of the properties
(3.36) to (3.44) from Section 3.1.1.4 and the knowledge of tabulated transforms, it still
remains very involved. Even more problematic is the fact that the Fourier transform
does not exist for every signal. Until now we have tacitly assumed that all signals can
be transformed. In fact, this is not true. The transforms only exist for signals for which the limits in (3.29) exist when letting 𝛥𝜔 → 0 and 𝑇 → ∞. It can be shown that this is the case for some signals but, unfortunately, for many important signals it is not. This issue can be dealt with to some extent by using the Laplace transform
instead of the Fourier transform (Section 3.1.2).
Last but not least the most serious issue in calculating the behavior of circuits by
means of the Fourier method is that one does not gain anything in understanding the
circuit. Although the response of a circuit to a signal can be calculated exactly, this
does not help in understanding the circuit.
Nevertheless, the transform is a very powerful tool. If only harmonic signals (sine- and cosine-functions) are considered, the outlined method allows for direct and easy investigation of the circuits’ transfer characteristics. Thus, one can see at a glance how a given circuit affects certain frequency components of an applied signal. Since harmonic functions transform into constants in the frequency domain (possibly with a constant phase shift as well), all that must be done is some simple algebra (given familiarity with complex numbers). There is no need for dealing with differential equations anymore, nor for explicitly performing the transform and its inverse. With the generalized complex impedances the calculation of circuits’ signal behavior can be done the same way as for DC signals.
This tremendously facilitates understanding, as we will see in the remaining sections
of this chapter.
The computation of the Fourier transform of a signal 𝑣(𝑡) is done by evaluating the
integral
𝑣̃(𝜔) = ∫_{−∞}^{∞} 𝑣(𝑡) e^{−j𝜔𝑡} d𝑡 . (3.46)
In 3.1.1.5 it was already stated that the integral does not converge for all possible signals, i.e., the Fourier transform does not always exist. This problem of bad convergence behavior is circumvented by including a factor e^{−𝜎𝑡} in the integrand, which effectively enforces convergence. The resulting transform is called the Laplace transform and is a function of two variables 𝜔 and 𝜎 which can be combined into one complex variable 𝑠 with
𝑠 = 𝜎 + j𝜔 . (3.47)
𝑣(𝑡) = (1/2𝜋j) × lim_{𝜔→∞} ∫_{𝜎−j𝜔}^{𝜎+j𝜔} 𝑣̆(𝑠) × e^{𝑠𝑡} d𝑠 (3.49)

with a real number 𝜎 so that the contour path of the complex integration is in the region of convergence of 𝑣̆(𝑠).
Very often the Laplace transformation is denoted
𝑣̆ = L[𝑣] (3.50)
𝑣 = L⁻¹ [𝑣̆] . (3.51)
voltage. Putting a resistor (in which no phase shift exists between voltage and current)
in series to a capacitor (that has −90° phase shift) into a one-port provides a complex
impedance with a phase shift somewhere between 0° and −90° depending on the con-
tribution of each component to the total impedance. Obviously, the behavior of such
an impedance cannot be described by a single property any more. After all, there are
two different kinds of components involved. It takes a two-dimensional relation to de-
scribe this complex impedance.
The polar presentation, which is based on the two properties magnitude and phase
shift, has the advantage that these two properties are measured in an actual circuit.
This makes this representation very convenient.
Using the two cartesian coordinates (on the real axis and the imaginary axis) in
the complex number plane is an alternative way of presenting an impedance that has
two properties. Although this presentation is more abstract, it is preferred because the
calculational effort is often smaller. It just requires the handling of complex numbers.
It will be the standard presentation of complex quantities in this book. As already
mentioned (Section 3.1.1.2), it is general practice to use j = √−1 as imaginary unit to
avoid confusion with the symbol 𝑖 which is reserved for current.
Thus, the complex admittance 𝑌(j𝜔) of a capacitor has the value 𝑌(j𝜔) = j𝜔𝐶.
A series connection of a capacitor 𝐶 with a resistor 𝑅 is just an addition of the two
impedances to yield the complex total impedance 𝑍(j𝜔) = 𝑅 + 1/(j𝜔𝐶) = 𝑅 − j/(𝜔𝐶). 𝑅 is the real part Re[𝑍(j𝜔)] of 𝑍(j𝜔), and −1/(𝜔𝐶) is referred to as the imaginary part Im[𝑍(j𝜔)] of 𝑍(j𝜔).
As the inductivity 𝐿 is dual to the capacity 𝐶 we apply duality to get the complex impedance of an inductor 𝑍(j𝜔) = j𝜔𝐿 and the admittance of a conductor in parallel to an inductor 𝑌(j𝜔) = 𝐺 − j/(𝜔𝐿).
From the complex presentation of the impedance 𝑍 = 𝑅 + j𝑋 with the resistance
𝑅 and the reactance 𝑋, the two parameters magnitude and phase shift of the polar
presentation can be derived as follows. The magnitude |𝑍|, presenting the ratio of the voltage peak amplitude 𝑣̂ across the impedance to the peak amplitude of the current 𝑖̂ flowing through the impedance, is given by

|𝑍| = 𝑣̂/𝑖̂ = √(𝑅² + 𝑋²) ;

the phase shift 𝜑, i.e., the phase difference between voltage and current, is given by

𝜑 = arctan(Im[𝑍]/Re[𝑍]) = arctan(𝑋/𝑅) . (3.56)
Combining both presentations one gets
𝑍 = 𝑅 + j𝑋 = √(𝑅² + 𝑋²) × e^{j arctan(𝑋/𝑅)} . (3.57)
the polar form is used. Using admittances instead of impedances, when appropriate,
might help in avoiding multiplications and divisions.
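As a minimal sketch (component values and frequency are assumptions), the cartesian and the polar presentation of a series RC impedance can be computed directly with Python’s complex numbers:

```python
# Cartesian vs. polar presentation of a series RC impedance Z = R - j/(wC),
# cf. (3.56) and (3.57). Component values and frequency are assumptions.
import cmath
import math

R = 1e3                    # ohm
C = 100e-9                 # farad
w = 2 * math.pi * 1e3      # angular frequency for f = 1 kHz

Z = R + 1.0 / (1j * w * C)             # cartesian form
magnitude, phase_rad = cmath.polar(Z)  # polar form

print(f"Z   = {Z.real:.0f} {Z.imag:+.0f}j Ohm")
print(f"|Z| = {magnitude:.0f} Ohm, phase = {math.degrees(phase_rad):.1f} deg")
```

For the assumed values the phase comes out between 0° and −90°, as expected for a resistor in series with a capacitor.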
Problems
3.9. What does the symbol j stand for?
3.10. Name the two parameters needed to fully describe infinite sinusoidal time dependences.
3.11. Can any two-dimensional mathematical relation be used for the presentation of
infinite sinusoidal time dependences?
3.2.1 Capacitors
From
𝑖𝐶 (𝑡) = 𝐶 × d𝑣𝐶 (𝑡)/d𝑡 (3.58)
follows
𝑉𝐶 (𝑇) = (1/𝐶) × ∫_0^𝑇 𝑖𝐶 (𝑡) d𝑡 = 𝑄𝐶 /𝐶 . (3.59)
If a capacitor 𝐶 is charged with the charge 𝑄𝐶 , there is a voltage 𝑉𝐶 across it. It is
important to interpret these equations correctly.
– Firstly, the (momentary) current that may flow into a capacitor is unlimited!
– Secondly, if there is no charge in the capacitor the voltage across it is zero. There-
fore, it should not surprise that there is a lag between current and voltage in a
capacitor, called phase shift in steady-state applications. The maximum current
will flow when the voltage is zero and the maximum voltage occurs when the cur-
rent is zero. Thus, the phase shift between voltage and current is −90°.
From equation (3.58), we learn that the current 𝑖𝐶 through a capacitor is proportional
to the time derivative of the voltage 𝑣𝐶 across the capacitor. The transfer function of a
series capacitor in a two-port (which is a transadmittance; see Figure 3.5a) is that of a
differentiator. It is proportional to the angular frequency (Figure 3.5b).
From equation (3.59), we learn that by integration of the current 𝑖𝐶 through a ca-
pacitor we obtain the voltage 𝑣𝐶 across it. The transfer function of a parallel capacitor
in a two-port (which is a transimpedance: see Figure 3.6a) is that of an integrator. It is
inversely proportional to the angular frequency (Figure 3.6b).
Another relation of importance shows that a capacitor stores electric energy 𝐸 in
the form of electric charge
𝐸𝑄 = ∫_0^𝑄 (𝑞/𝐶) d𝑞 = 𝑄²/(2𝐶) = 𝐶 × 𝑉𝐶 ²/2 . (3.60)
Fig. 3.5a. Series capacitor in a two-port with 𝑅L = 0. Fig. 3.5b. Frequency dependence of the transadmittance, characteristic of a differentiating circuit.
Fig. 3.6a. Parallel capacitor in a two-port with no load (𝑌L = 0). Fig. 3.6b. Frequency dependence of the transimpedance, characteristic of an integrating circuit.
The function of a charged capacitor is that of a temporary voltage source. In a short enough time interval the voltage stays about the same, independent of the current flow, because for high frequencies, i.e., fast changes, the impedance of a capacitor is very low.
In steady-state calculations using complex notation, the above mentioned phase shift of −90° makes the capacitor a purely imaginary admittance 𝑌𝐶 = j𝜔𝐶, or a purely imaginary negative impedance 𝑍𝐶 = −j/(𝜔𝐶).
For 𝜔 = 0 the admittance is zero, which means that no direct current (DC) can
flow through the capacitor. For 1/𝜔 = 0 the impedance is zero which means that at
high enough frequencies a capacitor acts as a short-circuit.
However, a real capacitor is not a pure capacitance. The leads will have some re-
sistance (and even inductance) as parasitic series impedance associated with them.
Besides, the dielectric material inside the capacitor has some conductance dissipat-
ing (a very small amount of) electric power. The loss due to this parasitic parallel conductance is expressed by the loss tangent tan 𝛿 which is a parameter of the dielectric material in the capacitor. This tangent is the ratio of the resistive (lossy) component to the reactive (lossless) component of the admittance. The reciprocal of the loss tangent is called the quality factor 𝑄 of the capacitor.
Fig. 3.7a. Charging a capacitor with constant current. Fig. 3.7b. Time dependence of capacitor voltage.
Example 3.1 (Charging a capacitor with constant current). In Figure 3.7 a simple switch
allows the current of an ideal current source either to flow into a capacitor 𝐶 (off po-
sition) or alternatively to ground (on position). Let us investigate the behavior of the
circuit in Figure 3.7 at an elementary level. In the on position of the switch, there will
be no voltage 𝑣𝐶 (𝑡) across the capacitor 𝐶, it is discharged. As soon as the switch goes
into the off position, the source current 𝑖S will start flowing into the capacitor.
Using the moment of the switching as a time reference, i.e., 𝑡 = 0, we get
𝑖𝐶 (𝑡 = 0) = 𝑖S , and 𝑣𝐶 (𝑡 = 0) = 0
because it takes charge, i.e., time to build up a voltage across the capacitor. By the
current flow the capacitor collects charge increasing the voltage across the capacitor
proportionally. From (3.58) one gets 𝑖𝐶 (𝑡)d𝑡 = d𝑞𝐶 = 𝐶 × d𝑣𝐶 . With a constant current 𝑖S the charge 𝑄𝐶 and consequently the voltage 𝑣𝐶 across the capacitor 𝐶 increase linearly with time, providing a linear voltage ramp (Figure 3.7b).
This linear voltage increase in time can be an unwanted effect in (active) circuits
when the amount of current charging a (stray) capacitance is limited due to a too high
impedance of the signal source. This effect is called slewing and the slew rate (of am-
plifiers) is typically given in V/μs.
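A one-line calculation (with assumed values for the current and the capacitance) shows how the ramp slope d𝑣/d𝑡 = 𝑖S /𝐶 translates into such V/μs figures:

```python
# Slope of the linear voltage ramp of Example 3.1: dv/dt = i_S / C.
# The current and capacitance values are assumptions.

def ramp_rate_v_per_us(i_s_ampere: float, c_farad: float) -> float:
    """Capacitor voltage slope in V/us for a constant charging current."""
    return i_s_ampere / c_farad * 1e-6

print(ramp_rate_v_per_us(1e-3, 100e-12))   # 1 mA into 100 pF -> 10.0 V/us
```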
Problem
3.12. A capacitor 𝐶 lies in series to a resistor 𝑅 shunted by a capacitor 𝐶 of the same
size, which are situated at the output of a two-port. This circuit can be viewed as a complex voltage divider. Give the frequency dependence of the amplitude transfer
function and the phase shift transfer function.
Parasitic capacitance, which also occurs inside electronic components, is called stray ca-
pacitance. Stray capacitances make signals leak between otherwise isolated circuits
(which is termed crosstalk). This can seriously affect the functioning of circuits at high
frequencies.
In macroscopic circuits (unavoidable) stray capacitances are on the order of 1 pF,
seemingly rather small. However, as with any ordinary capacitance their static value is
subject to a dynamic increase if they are part of a parallel feedback loop (Miller effect,
Section 2.4.3.2).
Problem
3.13. Think it over. It is advantageous (smaller energy loss!) to transmit electric power
over very long distances in the form of direct current rather than alternating current.
Why?
3.2.2 Inductors
In Section 1.3.4 we learned that the property that causes a voltage drop that is propor-
tional to a current change is called inductance 𝐿 which is measured in units of henry
(H). The electronic component having this property is called inductor
𝑣𝐿 (𝑡) = 𝐿 × d𝑖𝐿 (𝑡)/d𝑡 (3.62)
and
𝐼𝐿 (𝑇) = (1/𝐿) × ∫_0^𝑇 𝑣𝐿 (𝑡) d𝑡 = 𝛷𝐿 /𝐿 . (3.63)
3.15. A two-port with a conductance of 0.6 S across the input and one of 0.3 S in series
to the output has a load of 0.3 S shunted by an inductor of 2 mH. The source is a 9-mA
current source with an angular frequency of 𝜔 = 5 × 10³ s−1 .
(a) Find out (by thinking) if the following pairs of electrical variables are in phase (whether the phase shift is zero): 𝑣S – 𝑣o , 𝑣o – 𝑖o , and 𝑣S – 𝑖S .
(b) Calculate 𝑖o . (Hint: replace the source plus two-port according to Norton’s theo-
rem.)
Example 3.2 (Identification of (exactly) two linear elements which are not accessible).
Two linear components (e.g., hidden in a black-box) shall be identified by the ampli-
tude and/or phase response of the black-box. Figure 3.8 shows the arrangement cho-
sen to perform the measurements (with an oscilloscope). From the measurements the
following four results are obtained:
1. 𝑔𝑣 (𝑓 = 0) = 𝑣o /𝑣i = 0.50
2. 𝑔𝑣 (𝑓 = 0.1 MHz) = 0.71/2
3. (𝜑𝑣i − 𝜑𝑖i )(𝑓 = 0.1 MHz) = 45°
4. 𝑔𝑣 (𝑓 = 2 MHz) = 1.00
Fig. 3.8. Circuit of Example 3.2 (source 𝑣S , external 1 kΩ resistor, voltages 𝑣i and 𝑣o ).
The increase of the voltage gain from 0 to 2 MHz (results #1 and #4) indicates inductive
behavior. Result #4 excludes configurations where a resistor is shunted by an inductor.
Thus, we know that a resistor is in series to an inductor.
In this case, it is clear that the value of the resistor equals that of the external resistor, namely, 1 kΩ (from result #1), because at 𝑓 = 0 the impedance of the inductor is zero. The results #2 and #3 are equivalent, i.e., only one of them is necessary. A phase difference of 45° indicates that the real part and the imaginary part of the complex input impedance are equal: 𝜔𝐿 = 2 kΩ. Therefore, the unknown pair of linear one-ports in
the box is a resistor of 1 kΩ in series to an inductor of 10/𝜋 mH. The position of the two
elements within the black-box is electronically irrelevant. Thermography could find
the position of the resistor because the electric power dissipated in it would result in
a local temperature increase.
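The identified values (the external 1 kΩ resistor and, inside the box, 1 kΩ in series with 10/𝜋 mH, both from the text) can be cross-checked with a short Python sketch; the script itself is only illustrative:

```python
# Cross-check of Example 3.2: with R = 1 kOhm and L = 10/pi mH in series inside
# the box, the measured results #1, #3 and #4 follow directly (circuit as in Fig. 3.8).
import math

R_ext = 1_000.0          # external 1 kOhm resistor of Figure 3.8
R_box = 1_000.0          # identified resistor inside the black box
L_box = 10e-3 / math.pi  # identified inductor inside the black box (10/pi mH)

def z_box(f_hz: float) -> complex:
    """Series impedance of the two hidden elements at frequency f."""
    return complex(R_box, 2 * math.pi * f_hz * L_box)

# result #1: voltage gain at f = 0
print("g_v(0)      =", abs(z_box(0.0) / (R_ext + z_box(0.0))))                 # 0.50
# result #3: phase between v_i and i_i at f = 0.1 MHz
z_total = R_ext + z_box(0.1e6)
print("phase (deg) =", math.degrees(math.atan2(z_total.imag, z_total.real)))   # 45.0
# result #4: voltage gain at f = 2 MHz
print("g_v(2 MHz)  =", round(abs(z_box(2e6) / (R_ext + z_box(2e6))), 2))       # 1.0
```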
As time is the conjugate variable to the angular frequency 𝜔, the frequency response
of a circuit may also be specified by parameters describing the time dependence of the
response to a step signal. The following properties describe the time behavior of the
output signal in response to a step signal at the input (see Figure 3.9).
Step response is the time behavior of the output of a (two-port) circuit when its in-
put signal changes from zero to a flat maximum in a very short time. In binary
electronics, this would be a switch from the low state L to the high state H at the
input.
Propagation delay is the time difference between the time when the step occurs and
the moment when the output response reaches half its final value (the very first
time).
Rise time is the time it takes a signal to change from a given low value to a given
high value. These values are usually 10% and 90% of the step height. For negative
going signals, the term fall time is appropriate. When applied to an output signal
of a two-port, both parameters depend on both the rise or fall time of the input signal and the characteristics of the two-port.
Steady-state error is the difference between the actual final output value after the
circuit reaches a steady state and the one expected for an ideal circuit.
In Figure 3.9 ringing due to the presence of both capacitor and inductor in the circuit
is seen which is described by the following properties:
Overshoot is present when the signal exceeds its expected steady-state amplitude.
Peak time is the time it takes the output signal to reach the first peak of the overshoot.
Settling time is the time elapsed between the occurrence of an ideal step at the input
to the time at which the amplitude of the output signal has settled to its final value.
As an ideal step signal contains all frequencies, a correct transfer by the (active) two-
port would require that the amplitude and the phase of all frequency components are
transferred correctly to give an ideal step signal at the output. From an analysis of
the actual shape of the output signal, information can be gained in one step on the
two-port’s transfer property at all frequencies.
Working with step functions in the mathematical sense (covering minus and plus
infinity) is utterly impractical. Adding two step functions of the same amplitude but
opposite sign and delayed by some time interval 𝑡l results in a single rectangle of
length 𝑡l . In a practical application, such a signal of amplitude 𝐴 would be repeated
in time intervals of length 𝑇. This periodic rectangular wave has a fixed amplitude 𝐴
for some interval 𝑡l (the mark) during the period of length 𝑇 and has the value zero
(the space) for the rest of this period. The length of the mark divided by the period is
called duty cycle 𝑑
𝑑 = 𝑡l /𝑇 . (3.65)
The duty cycle is usually given in percent. A rectangular wave with a duty cycle
of 50% is called square wave. A duty cycle is also defined for nonperiodic signals. It
is the fraction of the total time under consideration in which a device is actively pro-
ducing a signal. The inverse of the period 𝑇 is the repetition frequency 𝑓 = 1/𝑇 [Hz].
Measurements of the step response are performed with rectangular signals having a
long 𝑡l and a not too high 𝑓. Obviously, in practical applications the sharpness of an
ideal step can only be approximated, i.e., the rise time of a signal can never be zero.
Problems
3.16. A steady flow of 10 V high rectangular pulses with a duty cycle of 𝑑 = 0.1 (=
10%) into a resistor of 1 kΩ heats it up.
(a) Determine the average power dissipated in the resistor.
148 | 3 Dynamic behavior of networks (signal conditioning)
(b) Under which circumstances must the maximum dissipated power be considered
rather than the average to determine whether the dissipated power is above the
power rating of the resistor?
3.18. A steady flow of ±1 V high (bipolar) rectangular pulses with a duty cycle 𝑑 =
0.01 (= 1%) into a resistor of 100 Ω heats it up.
(a) Determine the average power dissipated in the resistor.
(b) Explain the difference in dissipative power if the duty cycle is 𝑑 = 99%.
For a change, let us investigate the behavior of the circuit in Figure 3.10 at an ele-
mentary level. In the off position of the switch (Section 2.3.2.1), the capacitor 𝐶 will
discharge. After equilibrium has been reached, no current will flow and both voltages
𝑣𝐶 (𝑡) and 𝑣𝑅 (𝑡) will be zero. As soon as the switch is in the on position current will flow, and according to Kirchhoff’s second law the voltages for each moment 𝑡 afterward are given by the relation

𝑣S = 𝑣𝑅 (𝑡) + 𝑣𝐶 (𝑡) .

Using Ohm’s law we can express the voltage across the capacitor by

𝑣𝐶 (𝑡) = 𝑣S − 𝑅 × 𝑖(𝑡) ,

where 𝑖(𝑡) is the current through all elements in the loop. Using the moment of the
step (of the switching) as a time reference, i.e., 𝑡 = 0, we get
𝑖(𝑡 = 0) = 𝑣𝑅 (𝑡 = 0)/𝑅 = 𝑣S /𝑅
because it takes charge, i.e., time to build up a voltage across the capacitor. Clearly,
in the first moment, all the voltage is across 𝑅 and thus, the current has the above value.
Fig. 3.10. Simulating a voltage step signal to charge a capacitor.
By the current flow the capacitor collects charge increasing the voltage across it which
in turn decreases the voltage across 𝑅 reducing the current so that at any 𝑡 > 0 the
current is given by
𝑖(𝑡) = (𝑣S (𝑡) − 𝑣𝐶 (𝑡))/𝑅 .
On the other hand, we know from (3.58) that the capacitor voltage builds up as

𝑣𝐶 (𝑡) = 𝑣S × (1 − e^{−𝑡/𝜏} ) , (3.66)

with 𝜏 the time constant 𝜏 = 𝑅 × 𝐶. Using the relation 𝑣𝑅 (𝑡) = 𝑣S (𝑡) − 𝑣𝐶 (𝑡) we get

𝑣𝑅 (𝑡) = 𝑣S × e^{−𝑡/𝑅𝐶} . (3.67)
The voltage across the capacitor, as the response to a step signal from a real voltage
source, is again some kind of step signal, however, with a finite rise time. The rise time
𝑡rs , the time difference (𝑡0.9 − 𝑡0.1 ) between the time when 10% of the final step size is
reached, and the time when 90% of the final step size is reached, is easily obtained
from
𝑡rs = (ln 0.9 − ln 0.1) × 𝜏 = ln 9 × 𝜏 ≈ 2.2 × 𝜏 . (3.68)
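A short numerical check of (3.67) and (3.68), with an assumed R and C, confirms the factor ln 9 ≈ 2.2:

```python
# Numerical check of (3.67)/(3.68): for an exponential RC step response the
# 10%-90% rise time equals ln(9)*tau ~ 2.2*tau. R and C are assumed values.
import math

R = 10e3                      # ohm (assumption)
C = 1e-9                      # farad (assumption)
tau = R * C                   # time constant, here 10 us

def v_c(t: float, v_s: float = 1.0) -> float:
    """Capacitor voltage for a step of height v_s: v_C = v_S * (1 - exp(-t/tau))."""
    return v_s * (1.0 - math.exp(-t / tau))

t10 = -tau * math.log(1.0 - 0.1)      # time to reach 10 % of the final value
t90 = -tau * math.log(1.0 - 0.9)      # time to reach 90 % of the final value
print(f"v_C(t10)/v_S = {v_c(t10):.2f}, v_C(t90)/v_S = {v_c(t90):.2f}")
print(f"t_rs = {(t90 - t10) * 1e6:.2f} us,  ln(9)*tau = {math.log(9) * tau * 1e6:.2f} us")
```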
A negative step requires a source with a negative voltage. All above relations are
the same except that the term fall time 𝑡f is used instead of rise time
As an ideal step signal cannot be realized, it is important to know how the re-
sponse is to a real step signal with an intrinsic rise time 𝑡rs . The answer is that rise
times add quadratically
This means that contributions to the rise time that are at least a factor of 3 smaller than the largest one, the dominant rise time, may be disregarded, accepting a maximal 5% deviation from the correct value. This quadratic addition is very important because signal
generators with finite (well defined) rise times may be used as signal source of step
signals.
As the circuit response is the relationship between the circuit’s output and its input, it is not surprising that a rise time is also assigned to actual circuits.
The rise time of a circuit, a device, an instrument is the rise time of the response of the system in
question to a hypothetical ideal step signal at the input.
This rise time can be obtained from the bandwidth 𝐵𝑊 (Section 3.4.2) of an instrument by 𝑡rs = 2.2/(2𝜋 × 𝐵𝑊).
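Evaluated for a few assumed instrument bandwidths, this relation gives (the bandwidth values below are illustrative assumptions):

```python
# Rise time from bandwidth: t_rs = 2.2 / (2*pi*BW) ~ 0.35 / BW.
# The bandwidth values are illustrative assumptions.
import math

def rise_time_s(bw_hz: float) -> float:
    return 2.2 / (2.0 * math.pi * bw_hz)

for bw in (1e6, 100e6, 1e9):
    print(f"BW = {bw / 1e6:6.0f} MHz  ->  t_rs = {rise_time_s(bw) * 1e9:7.2f} ns")
```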
In Section 3.3.1, we have seen that a capacitor gets charged (and discharged) by the
current that accompanies a signal flow. If the time constant is short enough, i.e., if
the impedance of the signal source is small enough, the charge is removed “instantly”
as soon as the signal goes to ground. In this case, the charging may be disregarded
and the capacitor behaves just as a passive conductance of j𝜔𝐶. If, on the other hand,
the time constant is long, the signal current charges the capacitor to a level at which,
on average, the charging current equals the discharging current. Observe that if the
source is a voltage source 𝑣S with an impedance 𝑅S the charging current 𝑖𝐶 depends
on the voltage 𝑣𝐶 across the capacitor
𝑖𝐶 = (𝑣S − 𝑣𝐶 )/𝑅S
which means that only the voltage difference supplies the signal. For this signal, a
(linear) capacitor still behaves as a passive conductance of j𝜔𝐶. This situation can be
described by a (quiescent) operating point on the 𝑖–𝑣 characteristic provided by the
charge condition of the capacitor. In Figure 3.11, a capacitor is shown that is charged
by a voltage source 𝑣S via a source resistor 𝑅S and shunted by the load resistor 𝑅L .
Obviously, some of the source current gets lost by flowing through 𝑅L . What is the best
way to analyze this situation? As we are interested only in the electrical variables in
connection with the capacitor, we can replace the remaining linear network according
to Thevenin’s theorem by a real voltage source with 𝑣Th and 𝑅Th , where 𝑣Th is the open-circuit voltage and 𝑅Th the output impedance of that network.
Fig. 3.11. Loading a capacitor by a linear network: (a) circuit; (b) circuit after applying Thevenin’s theorem.
Fig. 3.12. Circuit of Problem 3.20 (𝜔 = 12 500 s−1 ; elements: 6 μF, 3 V, 3 V, 3 μF, 20 Ω, 10 Ω; loop current 𝑖; voltages 𝑣1 and 𝑣2 as referenced in Problem 3.20).
Problems
3.19. Because of 𝑌𝐶 = j𝜔𝐶 the conductance of a capacitor at 𝜔 = 0 is zero, i.e., no
real DC current can flow into a capacitor. Explain why the voltage of a battery that
is connected to two (equal) capacitors arranged in series is divided (equally) by this
capacitive voltage divider.
3.20. Calculate the amplitude of the following electrical variables from Figure 3.12:
(a) The voltage 𝑣1 across (10 Ω + 20 Ω).
(b) The voltage 𝑣2 across (10 Ω + 3 μF).
(c) The current 𝑖 in the loop.
Example 3.3 (Charging of a capacitor involving a diode). Figure 3.13a presents a circuit with a fall time considerably less than the rise time. A positive voltage step makes the (ideal) diode reverse biased so that the time constant 𝜏rs , effective for the rise time, is 𝜏rs = 𝐶 × 9 kΩ. With a capacitor of 5 nF the rise time 𝑡rs becomes 99 μs. After equilibrium, the negative voltage step brings the input voltage back to ground. Now
Fig. 3.13. Illustrating a step response with two different time constants: (a) circuit, (b) time dependence of capacitor voltage, and (c) resistor voltage.
the diode is forward biased so that the time constant 𝜏f , effective for the fall time, is
𝜏f = 𝐶 × 0.9 kΩ. With a capacitor of 5 nF the fall time 𝑡f becomes 9.9 μs, i.e., it is ten
times shorter. This is portrayed in Figure 3.13b which shows the time dependence of
the voltage across the capacitor. In Figure 3.13c the time dependence of voltage across
the resistor(s) is shown. This curve is just the difference between the rectangular in-
put signal and the voltage across the capacitor. The positive and negative spikes in
Figure 3.13c reflect the charging and discharging of the capacitor. One might wonder why the areas of these spikes are not equal. After all, the same amount of charge enters and leaves the capacitor. However, it takes (much) more time to charge the capacitor than to discharge it, as can be seen in Figure 3.13b. If one investigates the time depen-
dence of the current, one will find out that the negative current peak is eleven times
as large as the positive one so that in this picture the areas of the spikes (representing
the charge) are equal, as expected.
Example 3.4 (Charging of a capacitor involving an emitter follower). The reverse bias-
ing of a diode can have serious consequences when it occurs in amplifier stages. Let us
consider the output of an emitter follower (Section 2.5.2.1) that is loaded with a resistor
𝑅E shunted by a capacitor 𝐶 as shown in Figure 3.14a. When charging the capacitor
by means of the emitter current 𝐼E , the transistor is in the amplifying state having a
low output impedance of 𝑍o ≈ 𝑟e = 25 mV/𝐼E (with 𝑍o in Ω and 𝐼E in mA). As seen
from the capacitor, the impedance is 𝑅E shunted by 𝑍o and, therefore, the charging
time constant is 𝜏rs = 𝐶 × 𝑅E × 𝑍o /(𝑅E + 𝑍o ). If 𝑣i falls (suddenly) to zero, the base voltage 𝑣B
is zero whereas the emitter voltage 𝑣E is still positive because the capacitor is charged
positively. Therefore, the base-emitter diode is reverse biased and the transistor is not
Fig. 3.14. Output situation at an emitter follower with capacitive load: (a) amplifying state of the transistor (𝑍o ≈ 25 mV/𝐼E (mA)), (b) transistor in the cut-off state (𝑍o ≈ ∞), and (c) resulting response to a fast rectangular input signal.
Fig. 3.15. Basic complementary emitter follower. Fig. 3.16. Circuit of Problem 3.21 (source 𝑣S , 10 kΩ resistors, a diode, a 2 nF capacitor, output 𝑣o ).
working, it is in the cut-off region. The output impedance of this emitter follower is
that of a reverse biased diode, i.e., it is very high, and the time constant commanding
the fall time is 𝜏f = 𝐶 × 𝑅E . In Figure 3.14b this behavior is sketched.
By means of a complementary emitter follower (Figure 3.15), such a large difference between rise and fall time can be avoided. Under no-signal conditions, the circuit delivers no output current and consequently 𝑣o = 0. In this case, the base-emitter bias 𝑉BE of both transistors is zero, so that they are in the cut-off region with both transistors not conducting. A positive signal that is not sufficient to bias the upper (NPN) transistor properly (i.e., that is smaller than ≈ 0.65 V) will not take the operating point into the amplifying region to provide enough current for the output voltage to follow the input, and the lower (PNP) transistor gets even more strongly reverse biased by the positive-going input. The same is true for the lower transistor but for a negative-going
input. Thus, between about ±0.65 V (for technologies based on silicon) of input, the
circuit does not work as an emitter follower, resulting in a kink in the output signal for
an input signal crossing from negative to positive or vice versa. This kink is a form of
crossover distortion. To minimize this distortion it is necessary to bias (e.g., by means
of diodes) both transistors in a way that the quiescent operating points of both transis-
tors are in the amplifying region (B-amplifiers Section 2.3.4). Then, the upper emitter
follower will deliver current for the full positive swing of the input signal and the lower
transistor will sink the current for the full negative swing. In this case charging and
discharging of a capacitor shunting the load resistor is done with time constants that
are as equal as the output impedances of the two transistors are equal.
Problem
3.21. Investigate the voltage transfer of a negative 1 V rectangular signal of 10 μs
length through the circuit of Figure 3.16 (assuming an ideal diode).
(a) Determine the time constants for the negative 𝜏− and the positive 𝜏+ slope.
(b) Give the rise time 𝑡rs and the fall time 𝑡f of the output signal.
(c) What is the minimum output voltage?
3.3.2.2 Clamping
Let us investigate the idealized circuit in Figure 3.17a. Without further information,
there are three obvious properties of this circuit.
– The output voltage 𝑣o cannot go negative.
– No current will flow unless the input voltage is negative.
– The charging of the capacitor (after a current flow) will be such that the output
voltage will be positive for a zero input voltage.
Let the input voltage 𝑣i be a square wave oscillating between −3 V and +2 V. During the time with negative amplitude the capacitor gets charged to 𝑉𝐶 = 3 V; when the signal is positive no current will flow so that the charge of the capacitor does not change.
Fig. 3.17a. Idealized diode clamp circuit: an ideal voltage source in series with a capacitor 𝐶 and an ideal reverse biased diode D. Fig. 3.17b. Input signal (𝑣i ) and output signal (𝑣o ).
After the capacitor is completely charged to 3 V (which happens promptly only in the ideal case) the quiescent output voltage is 0 V, to which the input signal 𝑣i is added, so that the output voltage is a square wave oscillating between 0 V and 5 V.
Problems
3.22. Can the clamping property of a capacitor be explained by its impedance?
3.24. Why does an output capacitor of a DC-voltage-supply get charged despite the
fact that DC-current does not flow into a capacitor?
Fig. 3.18. Loss of charge in the clamping capacitor distorts output signal.
Fig. 3.19. Unsophisticated DC restorer (𝑅D forward impedance of the diode).
Fig. 3.20. (a) DC restorer with biased diodes. (b) Reducing the impedance of the diodes by the Miller effect.
In Figure 3.20a a matched pair of diodes D1 and D2 make the output voltage zero if
the same current 𝐼D flows through either diode. This is achieved by having the current
source at the top deliver twice the current (e.g., 0.2 mA) of the lower one (e.g., 0.1 mA).
When the output goes positive D1 gets reverse biased with practically no conductance so that there is no shunt to the load. When the output goes negative D2 gets cut off and the current through D1 increases, which speeds up the discharging of the capacitor. Thus, the discharging time constant is substantially reduced. The insert in Figure 3.20b shows how the diode impedance can be further reduced by inclusion into the feedback loop of an inverting operational amplifier (Section 2.5.2.3).
Another method to avoid undershoots is pole-zero cancelation (see Section 3.4.3.1).
Problems
3.25. What is the advantage of the current source in Figure 3.20a over an appropriately
chosen resistor?
Fig. 3.21. Half-wave rectification demonstrated with a sinusoidal signal provided by a transformer.
Fig. 3.22. Full-wave rectification demonstrated with a sinusoidal signal provided by a transformer.
Fig. 3.23. Principle of a voltage power supply based on full-wave rectification with a smoothing capacitor 𝐶: (a) input voltage; (b) circuit; and (c) output voltage 𝑣o with ripple.
The correct size of the smoothing capacitor depends on the load (the minimum
load resistor) so that a long discharging time constant keeps the amplitude of the rip-
ple at an acceptable level. However, the conductance of the capacitor is proportional
to its size so that the peak current in the transformer secondary and the diodes will be increased correspondingly. In practice, this peak current will be determined by the output impedance of the transformer. Using an active voltage regulator circuit (operation amplifier, Section 2.5.2.2) in cascade with the reservoir capacitor allows a substantial reduction of the capacity value with improved ripple performance.
Problem
3.27. When trying to understand how a circuit involving a diode works, is it better to
deal with the diode current or voltage?
Fig. 3.24. Diode voltage doubler with a tap provision.
Fig. 3.25. Diode voltage doubler of cascade type (𝑣o = 2 × 𝑣ipeak ).
Fig. 3.26. Principle of a Cockcroft–Walton voltage quadrupler (cascaded voltage doubler).
This circuit can also be explained statically with the help of Section 3.3.2.2 (clamp
circuits) and Section 3.3.2.4 (half-wave rectifiers). 𝐶1 and D2 form a clamp circuit that
lifts the signal zero-line by 𝑣peak . D1 and 𝐶2 constitute a half-wave rectifier which rec-
tifies the clamped input signal.
Figure 3.26 shows the principle of a voltage quadrupler. Voltage multipliers obey-
ing this principle are also called Cockcroft–Walton circuits. These circuits are capable
of producing an output voltage that is tens of times the peak AC input voltage at a quite
limited output current rating.
Problem
3.28. Why can voltage multiplying not be explained without considering the current
through the diodes?
Fig. 3.27. Simulating a current step signal to charge an inductor.
The behavior of this circuit is dual to that in Figure 3.10. Consequently, we just
present the relevant equations without further comment
𝑖S = 𝑖𝐺 (𝑡) + 𝑖𝐿 (𝑡)
𝑖𝐿 (𝑡) = 𝑖S − 𝐺 × 𝑣(𝑡)
𝑣(𝑡 = 0) = 𝑖𝐺 (𝑡 = 0)/𝐺 = 𝑖S /𝐺
𝑣(𝑡) = (𝑖S − 𝑖𝐿 (𝑡))/𝐺 .
From (3.62)
𝑣𝐿 (𝑡) = 𝐿 × d𝑖𝐿 (𝑡)/d𝑡 = (𝑖S − 𝑖𝐿 (𝑡))/𝐺
and with 𝑖𝐿 (𝑡 = 0) = 0 we get
𝑖𝐿 (𝑡) = 𝑖S × (1 − e^{−𝑡/𝐺𝐿} ) = 𝑖S × (1 − e^{−𝑡/𝜏} ) (3.71)
with 𝜏 the time constant 𝜏 = 𝐺 × 𝐿. Using the relation 𝑖𝐺 (𝑡) = 𝑖S − 𝑖𝐿 (𝑡) we get
𝑖𝐺 (𝑡) = 𝑖S × e^{−𝑡/𝜏} . (3.72)
The current in the inductor as a response to a step signal from a real current source
is again some kind of step signal, however, with a finite rise time. This rise time 𝑡rs is
again
𝑡rs = (ln 0.9 − ln 0.1) × 𝜏 = ln 9 × 𝜏 ≈ 2.2𝜏 . (3.73)
The frequency dependence of a two-port comprises both the amplitude and the phase
shift response of all four two-port parameters. They must be given, e.g., by using the
complex notation. The presentation of input and output impedance does not differ
from the presentation used for the impedance of a one-port (Section 3.2). In addition,
there is the (forward) transfer function and the reverse transfer function. The small-
ness of the latter justifies the usual practice of not considering it. Of the four (forward)
transfer parameters (Section 2.2.2), we choose the complex voltage transfer function
𝐺𝑣 (j𝜔)
𝐺𝑣 (j𝜔) = Re[𝐺𝑣 (j𝜔)] + j × Im[𝐺𝑣 (j𝜔)] = 𝑔𝑣 (𝜔) × e^{j𝜑(𝜔)} , (3.74)
with the magnitude 𝑔𝑣 (𝜔)

𝑔𝑣 (𝜔) = |𝐺𝑣 (j𝜔)| = √(Re[𝐺𝑣 (j𝜔)]² + Im[𝐺𝑣 (j𝜔)]²) (3.75)
Fig. 3.28. The six basic simple frequency dependent voltage dividers.
frequency components of signals. And (e) and (f) are resonance filters, because their
components form a series resonant circuit that is tuned to its resonance frequency.
Even if a pair of filters has the same frequency response function, they are not
identical in all their properties. This can best be seen, e.g., by comparing the output
impedances 𝑍o at very low and at very high frequencies.
Problems
3.30. Determine the input impedance 𝑍i of each of the six voltage filters from Figure 3.28 for 𝑓 = 0.
3.31. Determine the input impedance 𝑍i of each of the six voltage filters from Fig-
ure 3.28 for 1/𝑓 = 0.
3.32. Determine the output impedance 𝑍o of each of the six voltage filters from Fig-
ure 3.28 for 𝑓 = 0. (As shown, with an open-circuit at the input.)
3.33. Determine the output impedance 𝑍o of each of the six voltage filters from Fig-
ure 3.28 for 1/𝑓 = 0. (As shown, with an open-circuit at the input.)
Fig. 3.29. The six basic simple frequency dependent current dividers.
Problems
3.34. Determine the input impedance 𝑍i of each of the six current filters from Fig-
ure 3.29 for 𝑓 = 0.
3.35. Determine the input impedance 𝑍i of each of the six current filters from Fig-
ure 3.29 for 1/𝑓 = 0.
3.36. Determine the output impedance 𝑍o of each of the six current filters from Fig-
ure 3.29 for 𝑓 = 0. (As shown, with an open-circuit at the input.)
3.37. Determine the output impedance 𝑍o of each of the six current filters from Fig-
ure 3.29 for 1/𝑓 = 0. (As shown, with an open-circuit at the input.)
The shape of the phase and amplitude responses is the same for all four low-pass filters
except that one-half deals with voltage and the other half with current. Following the
general trend we pick the voltage RC low-pass filter as an example. Such a filter is shown
in Figure 3.28a.
The product of resistance 𝑅 and capacitance 𝐶 gives the time constant 𝜏 of the
filter. The turnover frequency 𝑓u (in Hz) is determined by the time constant:

𝑓u = 1/(2𝜋𝜏) = 1/(2𝜋𝑅𝐶) , (3.77)
or better by using the angular frequency 𝜔u (in units of radians/s, or simply s−1 ):
𝜔u = 1/𝜏 = 1/(𝑅𝐶) . (3.78)
Fig. 3.30. (a) Frequency dependence of the amplitude response (Bode plot) of a simple low-pass filter together with the straight-line approximation. (b) Frequency dependence of the phase response (Bode plot) of a simple low-pass filter together with the straight-line approximation.
From Figure 3.30a it can be seen that the frequency response of the amplitude
transfer is smooth. It can be linearly approximated by two straight lines intersecting
at the (upper) corner frequency 𝜔u (cut-off or break frequency). The frequencies be-
low the corner frequency are in the pass-band, those that are above are in the stop-
band. The width of the pass-band is called the bandwidth 𝐵𝑊. The rate of frequency roll-off in the stop-band has a slope of −20 dB per decade, which equals −6 dB per octave. At the corner frequency, the actual amplitude response is lower by 3.01 dB, i.e., a (volt-
age or current) signal at the output having the corner frequency will be attenuated
to 1/√2 ≈ 0.7071 of its input value. This means that the transmitted power will be halved (hence −3 dB), with the transmission of power through the circuit declining further
with increased frequency. As discussed in Section 3.2.1 the low-pass filter has inte-
grating properties for frequencies in the stop-band.
The phase response curve can also be approximated by (three) straight lines. At
the corner frequency the curve and the linear approximation intersect at −45°. In the
linear approximation, the phase shift is zero at 𝜔 = 0.1𝜔u , and −90° at 𝜔 = 10𝜔u ,
i.e., the slope is −45° per decade.
Such maximally flat magnitude filters have as flat a frequency response as possible in the pass-band; they are called Butterworth filters. The order 𝑛 of a Butterworth filter is given by the steepness of the slope of the frequency response in the stop-band: it is 𝑛 × 20 dB per decade. Thus, the simple RC filter is a first-order Butterworth filter.
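The stop-band slope of 𝑛 × 20 dB per decade can be checked numerically. The sketch below assumes the standard Butterworth magnitude |𝐺| = 1/√(1 + (𝜔/𝜔u)^(2𝑛)) (not derived in the text) and evaluates the attenuation one and two decades above the corner frequency.

    import math

    def butterworth_gain_db(w_ratio: float, n: int) -> float:
        """Magnitude of an n-th order Butterworth low-pass in dB; w_ratio = omega/omega_u."""
        return -10.0 * math.log10(1.0 + w_ratio ** (2 * n))

    for n in (1, 2, 3):
        a1 = butterworth_gain_db(10.0, n)    # one decade above omega_u
        a2 = butterworth_gain_db(100.0, n)   # two decades above omega_u
        print(f"n = {n}: {a1:6.1f} dB at 10*omega_u, slope ~ {a2 - a1:6.1f} dB/decade")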
Using the complex notation, the calculation of the transfer function is straightforward. In the voltage filter the two elements form a voltage divider, in the current filter a current divider. From the divider equation

𝐺𝑣(j𝜔) = 𝑣o(j𝜔)/𝑣i(j𝜔) = 𝑍2/(𝑍1 + 𝑍2) ,   (3.79)

with 𝑍1 = 𝑅 and 𝑍2 = 1/(j𝜔𝐶) it becomes

𝐺𝑣(j𝜔) = 1/(1 + j𝜔𝜏) = 1/√(1 + (𝜔𝜏)²) × e^(−j arctan 𝜔𝜏) .   (3.80)
The first factor is the amplitude term |𝐺𝑣| (shown in Figure 3.30a), and the phase term (shown in Figure 3.30b) is 𝜑(𝜔) = − arctan 𝜔𝜏. A fast check can be made for the values with 𝜔𝜏 = 1. There, the amplitude term is |𝐺𝑣| = 1/√2 ≈ 0.7071 = −3 dB and the phase term is 𝜑 = −45°, as shown in the plots. Considering the property of the capacitor (it must get charged before a voltage occurs), it is obvious that there is a lag between the input and the output voltage. Consequently, the phase shifts in Figure 3.30b are negative.
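The fast check at 𝜔𝜏 = 1 can also be reproduced numerically from (3.80); the following sketch evaluates amplitude and phase of the complex transfer function for a few values of 𝜔𝜏.

    import cmath, math

    def G_lowpass(w_tau: float) -> complex:
        """Complex transfer function G_v(j*omega) = 1/(1 + j*omega*tau) of the RC low-pass."""
        return 1.0 / (1.0 + 1j * w_tau)

    for w_tau in (0.1, 1.0, 10.0):
        g = G_lowpass(w_tau)
        amp_db = 20.0 * math.log10(abs(g))
        phase_deg = math.degrees(cmath.phase(g))
        print(f"omega*tau = {w_tau:5.1f}: |G| = {abs(g):.4f} ({amp_db:6.2f} dB), "
              f"phase = {phase_deg:6.1f} deg")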
Problems
3.38. A low-pass filter has a time constant 𝜏.
(a) Give the corner frequency 𝑓u of the frequency response function.
(b) Give the value of the amplitude transfer function 𝑔𝑣 at 10𝑓u .
(c) Give the value of the phase angle 𝜑 at 10𝑓u , both exact and using the straight line
approximation.
3.40. The corner frequency 𝑓u of a low-pass filter is 𝑓u = 0.35 MHz. Determine the
rise time of the output signal when
(a) an ideal step function is applied to the input, and when
(b) a signal of the shape of the output signal of (a) is applied to the input.
3.41. A rectangular signal, 10 V high and 10 μs long, is fed into the input of a simple low-pass filter consisting of a resistor 𝑅 = 10 kΩ and a capacitor 𝐶 = 2 nF.
(a) What is the maximum output voltage 𝑣omax ?
(b) What is the rise time of the output signal?
(c) What is the fall time of the output signal?
3.42. A step function is applied to the input of the circuit of Figure 3.31. How long
does it take for the output voltage to reach 6.32 V?
Fig. 3.31. Circuit of Problem 3.42 (10 V source, 100 Ω, 10 μF). Fig. 3.32. Circuit of Problem 3.43 (source of angular frequency 𝜔, resistor 𝑅, capacitors 𝐶1 and 𝐶2).
3.43. Determine in Figure 3.32 the angular frequency at which the voltage across the
resistor is twice that of the voltage across each of the capacitors.
𝑓l = 1/(2𝜋𝜏) = 1/(2𝜋𝑅𝐶)   (3.81)
Fig. 3.33. (a) The frequency dependence of the amplitude response (Bode plot) of a simple high-pass filter together with its straight-line approximation (slope +20 dB/decade in the stop-band). (b) The frequency dependence of the phase response (Bode plot) of a simple high-pass filter together with its straight-line approximation (slope −45°/decade).
The first factor is the amplitude term |𝐺𝑣 (j𝜔)| (Figure 3.33a), and arctan(1/𝜔𝜏) is
the phase term 𝜑(𝜔) (shown in Figure 3.33b). A fast check can be made for the values
with 𝜔𝜏 = 1. There, the amplitude term is 𝑔𝑣 = |𝐺𝑣 | = 1/√2 ≈ 0.7071 = −3 dB and
the phase term is 𝜑 = 45° as shown in the plots.
It is quite obvious that for those filters in which 𝑍2 is dissipative (a resistor or a conductance, respectively) the phase response of the transfer function is identical with that of the input impedance (or admittance, respectively), because the output voltage (or current, respectively) is proportional to the input current (or voltage, respectively). This is also the reason for the 90° phase shift at low frequencies: it just reflects the phase shift between voltage and current in the capacitor.
Problems
3.47. Design the dual counterpart of a voltage high-pass RC-filter.
3.48. The input impedance 𝑍i of a simple voltage high-pass filter with a time constant
of 𝜏 = 10 μs is 𝑍i = 14.14 kΩ at the angular frequency 𝜔x = 105 s−1 .
(a) Give the two circuits (including the correct values of the components) that fulfill the above requirements.
(b) Give at 𝜔x , for both circuits, the phase shift between the output voltage 𝑣o and the
current 𝑖o through the element situated parallel to the output.
(c) Give at 𝜔x , for both circuits, the phase shift between the input voltage 𝑣i and the
input current 𝑖i .
Fig. 3.34b. A resistor shunting the capacitor of the filter restores unipolarity (removes undershoot).
Problems
3.51. What is the purpose of pole-zero-cancellation?
3.52. How must the time constants of the two filter sections producing a double-
differentiated signal be chosen so that double differentiation has the least effect on
the baseline?
Fig. 3.35. Typical frequency response of the transfer function of a band-pass filter with lower −3 dB corner frequency 𝑓l and upper −3 dB corner frequency 𝑓u.
At 𝜔n the phase shift 𝜑 is zero. At low frequencies the equation for the phase shift 𝜑 degenerates to

𝜑(𝜔) = arctan(−5𝜔𝜏) ,

so that the phase shift is zero at 𝜔 = 0 (as expected due to the transmission through the two resistors), and at high frequencies the equation for the phase shift degenerates to

𝜑(𝜔) = arctan(−1/(𝜔𝜏)) ,

so that the phase shift is zero at 1/𝜔 = 0 (as expected due to the transmission through the two capacitors forming a short-circuit).
Up to now we have omitted voltage and current division by the purely imaginary one-ports, the inductor 𝐿 and the capacitor 𝐶. If they are combined, they form an LC circuit, also called a resonant, tank, or tuned circuit. It is helpful to study this pure form first to gain a good understanding before including the unavoidable stray properties present in the actual components, resulting in so-called RLC circuits.
LC circuits are not only used as filters (selecting a signal at a specific frequency from a complicated signal) but even more so in harmonic oscillators (Section 4.2) for generating sinusoidal signals at a particular frequency. If both reactive elements in a filter are of the same nature, one gets frequency-independent division because the frequency dependences drop out.
Problems
3.53. Calculate the transfer function of a voltage divider consisting of two inductors.
3.54. Calculate the transfer function of a voltage divider consisting of two capacitors.
3.55. Calculate the transfer function of a current divider consisting of two inductors.
3.56. Calculate the transfer function of a current divider consisting of two capacitors.
the imaginary part of the impedance becomes zero, and the impedance is real. Current
and voltage are in phase, and the current is maximal. The frequency at which the phase
shift vanishes (at which the impedance becomes real) is called resonant frequency 𝑓r
or resonant angular frequency 𝜔r .
Only at 𝜔r is the total impedance 𝑍 zero; otherwise it is nonzero. Below 𝜔r the circuit is capacitive, above 𝜔r it is inductive. Therefore, the series LC circuit, when connected in series with a resistive load, will act as a narrow voltage band-pass filter having zero output impedance at the resonant frequency of the LC circuit. Whereas for the ideal series resonant circuit the position of the minimum (input) impedance lies at the resonant frequency, it moves to higher frequencies with increasing (resistive) loss in the components of the resonant circuit.
Problem
3.57. An AC voltage source of 200 V feeds a series RLC resonant circuit. At 50 kHz the
current is 5 mA and has a phase shift of 0° to the voltage. At 100 kHz the current is
3 mA. Identify the three elements R, L, and C.
Example 3.5 (Comparison of the two types of resonant circuits made of real compo-
nents). A real inductor has a rather pronounced series resistance represented by 𝑅s ,
and a real capacitor has a rather insignificant parallel conductance represented by 𝐺p .
Including these stray parameters into our resonant circuits we get the circuits of Fig-
ure 3.37. As these circuits are strictly dual to each other, it suffices to investigate just
one of them. Observe, in particular, that 𝑅s is in series with the series resonant circuit and 𝐺p in parallel with the parallel resonant circuit.
Fig. 3.37. Equivalent circuits of the two types of resonant circuits made of real components: (a) series resonant circuit and (b) parallel resonant circuit.
Being real themselves, these stray elements do not affect the condition for resonance. So we conclude that the resonant frequency of the series resonant circuit is independent of 𝑅s and, in the dual version, independent of 𝐺p.
The impedance of the series resonant circuit is given by

𝑍(j𝜔) = 𝑅s + j𝜔𝐿 + 1/(𝐺p + j𝜔𝐶) = 𝑅s(1 + j𝜔𝜏1) + 1/(𝐺p(1 + j𝜔𝜏2))   (3.89)

with 𝜏1 = 𝐿/𝑅s and 𝜏2 = 𝐶/𝐺p. Separating the imaginary part of 𝑍(j𝜔) we get

Im[𝑍(j𝜔)] = 𝑅s𝜔𝜏1 − 𝜔𝜏2/(𝐺p(1 + 𝜔²𝜏2²))   (3.90)

and making it zero we get for the resonant frequency of the serial circuit

𝜔s = √(1/(𝐿𝐶) − (𝐺p/𝐶)²) = √(1/(𝐿𝐶) − 1/𝜏2²) .   (3.91)

The dual version for the resonant frequency of the parallel circuit is

𝜔p = √(1/(𝐿𝐶) − (𝑅s/𝐿)²) = √(1/(𝐿𝐶) − 1/𝜏1²) .   (3.92)
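A minimal numerical sketch of (3.91) and (3.92) versus the ideal value of (3.88); the component and stray values below are assumed for illustration only and are not taken from the text.

    import math

    L  = 100e-6   # assumed inductance in henry
    C  = 1e-9     # assumed capacitance in farad
    Rs = 5.0      # assumed series resistance of the inductor in ohm
    Gp = 1e-6     # assumed parallel conductance of the capacitor in siemens

    tau1 = L / Rs          # time constant of the lossy inductor
    tau2 = C / Gp          # time constant of the lossy capacitor

    omega_ideal = 1.0 / math.sqrt(L * C)                    # lossless case, eq. (3.88)
    omega_s = math.sqrt(1.0 / (L * C) - 1.0 / tau2**2)      # series circuit, eq. (3.91)
    omega_p = math.sqrt(1.0 / (L * C) - 1.0 / tau1**2)      # parallel circuit, eq. (3.92)

    print(f"ideal   : {omega_ideal:.6e} rad/s")
    print(f"series  : {omega_s:.6e} rad/s (relative shift {1 - omega_s/omega_ideal:.2e})")
    print(f"parallel: {omega_p:.6e} rad/s (relative shift {1 - omega_p/omega_ideal:.2e})")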
With regard to the resonant frequency, the series resonant circuit is the better choice. As 𝜏2 is generally much larger than 𝜏1, its resonant frequency depends less on the stray properties. This is important, as the temperature dependence of these stray properties makes the resonant frequency shift with temperature.
The selectivity of a resonant filter depends on the quality factor 𝑄 of the circuit. For the serial arrangement, it is the ratio of Im[j𝜔𝐿] to the total impedance of the circuit at 𝜔r:

𝑄 = 𝜔r𝐿/𝑍(𝜔r) = 𝜔r𝐿/(𝐿(1/𝜏1 + 1/𝜏2)) = 𝜔r/(1/𝜏1 + 1/𝜏2) = 1/((1/𝜔r)(1/𝜏1 + 1/𝜏2)) .   (3.93)

The inverse is easier to remember:

1/𝑄 = (1/𝜔r) × (1/𝜏1 + 1/𝜏2) .   (3.94)

As 𝜏1 is dual to 𝜏2 and vice versa, this equation applies for both resonant circuits.
As the loss tangent (Section 3.2.1) of high quality capacitors and consequently 𝐺p can be very small, the resonant frequency of serial resonant circuits is that of the ideal circuit (3.88) and the quality factor 𝑄s becomes

𝑄s = √(1/(𝐿𝐶)) × 𝐿/𝑅s = √(𝐿/𝐶) × 1/𝑅s .   (3.95)
Obviously, it is essential that the series resistance of the inductor be small. There are practical limits to increasing 𝐿: an increase in 𝐿 will significantly increase 𝑅s, too, reducing the net benefit. A decrease in 𝐶 is limited by the existence of stray capacitances, which should not become too prominent.
The quality factor 𝑄 reflects the bandwidth of the resonant filter. For 𝑄 > 10, the bandwidth 𝐵𝑊 divided by the resonant frequency 𝑓r = 𝜔r/2𝜋 is very well given by 1/𝑄:

𝐵𝑊 ≈ (1/𝜏1 + 1/𝜏2) × 1/(2𝜋) .

As the quality factor of RLC resonant circuits is of the order of 10², the condition 𝑄 > 10 would usually be fulfilled. For a series resonant circuit, the value of 𝐵𝑊 degenerates to 𝐵𝑊 ≈ 1/(2𝜋𝜏1).
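Continuing the numerical sketch above, quality factor and bandwidth follow directly from (3.94) and the relation 𝐵𝑊 ≈ (1/𝜏1 + 1/𝜏2)/(2𝜋); the same assumed component values are reused here.

    import math

    L, C, Rs, Gp = 100e-6, 1e-9, 5.0, 1e-6   # same assumed values as in the sketch above
    tau1, tau2 = L / Rs, C / Gp

    omega_r = math.sqrt(1.0 / (L * C) - 1.0 / tau2**2)          # resonant frequency, eq. (3.91)
    Q  = 1.0 / ((1.0 / omega_r) * (1.0 / tau1 + 1.0 / tau2))    # quality factor, eq. (3.94)
    BW = (1.0 / tau1 + 1.0 / tau2) / (2.0 * math.pi)            # bandwidth in Hz (valid for Q > 10)

    print(f"Q = {Q:.1f}, BW = {BW/1e3:.2f} kHz, f_r = {omega_r/(2*math.pi)/1e3:.1f} kHz")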
Problems
3.58. A voltage-dependent current source (𝑔m = 2 mA/V, 𝑍o = 0.5 MΩ) feeds a par-
allel resonant circuit (𝐶 = 200 pF, 𝐿 = 0.12 mH, 𝑅s = 10 Ω). Determine
(a) the resonant frequency 𝑓r ,
(b) the admittance of the circuit at the resonant frequency,
(c) the voltage gain 𝑔𝑣 at the resonant frequency, and
(d) the quality factor 𝑄 of the resonant circuit.
3.59. The capacitor 𝐶 of a parallel resonant circuit is 𝐶 = 318 pF, the series resistor
𝑅s of the inductor is 𝑅s = 10 Ω and its resonant frequency 𝑓r = 1 MHz.
(a) Determine the quality factor 𝑄.
(b) Which conductance 𝐺p of the capacitor 𝐶 gives the same 𝑄 as 𝑅s of the inductor?
3.61. Calculate the voltage transfer function of a resonant filter section consisting of
a resistor 𝑅 in series to an ideal parallel resonant circuit.
(a) Give the complex transfer function.
(b) Give the frequency dependence of the amplitude.
(c) Give the frequency dependence of the phase shift.
Solve by thinking:
(d) At which frequency is the phase shift 0° between 𝑣o and 𝑣i ?
(e) What value has the input impedance 𝑍i at this frequency?
(f) What value has the output impedance 𝑍o at this frequency?
Fig. 3.38. Chain of three low-pass RC voltage filter sections (a) directly connected, (b) isolated by means of buffer amplifiers with a voltage gain 𝑔𝑣 = 1.
Because of the lack of isolation between input and output, the loading of a filter section must always be considered. Consequently, in the case shown in Figure 3.38a for low-pass filter sections, the circuit must be analyzed as a whole, not section by section. In Figure 3.38b isolating amplifiers (voltage followers) separate each section from the next one, so that the overall response is simply the product of the responses of the single sections.
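A minimal numerical sketch of this difference, assuming three identical sections of the type of Figure 3.28a (series 𝑅, shunt 𝐶) with illustrative values 𝑅 = 1 kΩ and 𝐶 = 1 nF: the buffered chain is the cube of one section's transfer function, whereas the directly connected chain is evaluated as a ladder network with ABCD (chain) matrices.

    import math

    R, C = 1e3, 1e-9          # assumed section values (illustrative only)

    def section_abcd(w: float):
        """ABCD matrix of one section: series R followed by shunt C."""
        s = 1j * w
        return ((1 + s * R * C, R),
                (s * C, 1))

    def mat_mul(m1, m2):
        (a1, b1), (c1, d1) = m1
        (a2, b2), (c2, d2) = m2
        return ((a1 * a2 + b1 * c2, a1 * b2 + b1 * d2),
                (c1 * a2 + d1 * c2, c1 * b2 + d1 * d2))

    def gain_direct(w: float) -> complex:
        """Unloaded voltage transfer of three directly connected sections: 1/A of the chain matrix."""
        m = section_abcd(w)
        m = mat_mul(m, section_abcd(w))
        m = mat_mul(m, section_abcd(w))
        return 1.0 / m[0][0]

    def gain_buffered(w: float) -> complex:
        """Three sections isolated by unity-gain buffers: product of the single-section responses."""
        return (1.0 / (1 + 1j * w * R * C)) ** 3

    w_u = 1.0 / (R * C)       # corner frequency of a single section
    for w in (0.1 * w_u, w_u, 10 * w_u):
        gd, gb = gain_direct(w), gain_buffered(w)
        print(f"w/w_u = {w/w_u:5.1f}: direct {20*math.log10(abs(gd)):7.2f} dB, "
              f"buffered {20*math.log10(abs(gb)):7.2f} dB")

The directly coupled chain shows noticeably more attenuation at the single-section corner frequency because each section loads the previous one.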
Problem
3.62. Are the input variables of a two-port dependent on the output variables?
Fig. 3.39. Phase shift 𝜑(𝜔) and phase deviation 𝜖(𝜔) versus frequency.
Fig. 3.40. Distortion of a single rectangular signal transmitted through a two-port; (a) insufficient amplitude bandwidth, and (b) insufficient phase bandwidth.
As the number of filter sections increases, the effective bandwidth is narrowed on cascading. The phase distortion at the corner frequency of the amplitude transfer increases so much that the frequency response of the phase shift starts to dominate the transient behavior. Thus, the term phase bandwidth was introduced, defining the frequency range where the absolute value of the difference in phase shift from a linear dependence is less than 45° (𝜋/4). Depending on the application, another definition of the phase bandwidth is preferred: it is the width of the frequency range over which the phase-vs.-frequency characteristic deviates from linearity by less than 0.5 rad (about 28.7°, Figure 3.39).
Figure 3.40 gives an impression of how a rectangular signal gets distorted due to
an imperfection in the amplitude transfer, and in the phase transfer.
Problems
3.63. Calculate the phase deviation of a low-pass filter from a linear phase at the cor-
ner frequency.
3.64. A simple voltage low-pass filter consists of a resistor 𝑅 = 1 kΩ, and a capacitor
𝐶 = 10 nF.
(a) Give the (amplitude) bandwidth (in Hz).
(b) Give the phase bandwidth (in Hz).
Fig. 3.41. Response of an RC combination to rectangular voltage signals of various lengths (0.2𝜏, 𝜏, 4𝜏, 8𝜏): (a) rectangular voltage input signal, (b) voltage signal across R (high-pass filter), and (c) voltage signal across C (low-pass filter).
Table 3.1. Maximum fraction of the peak voltage across the capacitor as a function of the signal length 𝑡l.
This fact can also be interpreted as meaning that across 𝑅 there is no voltage contribution with the frequency component 𝜔 = 0, i.e., no DC component. It does not come as a surprise that the capacitor has zero conductance at this frequency.
It is worthwhile to remember that in Figure 3.41c short signals do not attain the full height of the step. Table 3.1 lists the maximum height for several signal lengths.
Problems
3.65. A simple voltage low-pass filter has a time constant of 10 μs. Its input impedance
at 𝜔 = 0 is 10 kΩ.
(a) Identify the (two) linear components of the filter.
(b) What is the maximum output voltage if a single positive 10 V rectangular signal
of length 𝑡l = 10 μs is fed into the input?
(c) Sketch the linear approximation of the frequency response of the amplitude transfer and of the phase shift.
(d) How much does the amplitude of the voltage across the inductor drop within the
first 10 μs if a single positive 10 V step signal is fed into the input?
𝜔p = √(1/(𝐿𝐶) − (𝑅L/𝐿)²) = (1/𝐿) × √(𝐿/𝐶 − 𝑅L²) .   (3.97)

For 1/𝐶 = 𝑅L²/𝐿, i.e., 𝐿/𝑅L = 𝐶/𝐺L, i.e., 𝜏1 = 𝜏2, the frequency becomes zero, i.e., there is no oscillation. This condition is called critical damping. The circuit is over-damped if 𝐿/(𝐶𝑅²) is less than one, otherwise under-damped. Table 3.2 lists several specific choices of 𝐿 together with the resulting performance of the circuit. Figure 3.44 shows the improvement in the rise time of a step function through shunt compensation.
Fig. 3.42. Output condition at a dependent current source with load resistor. The stray capacitance is shown dashed.
Fig. 3.43. The load impedance is increased by adding an inductor in series to the load resistor. The stray capacitance is shown dashed.
Table 3.2. Listing of 𝐿/(𝐶𝑅²) values and the resulting performance of the RLC circuit.

𝐿/(𝐶𝑅²)   Performance
0          𝑡rs = 2.2𝑅𝐶, no shunt compensation
0.25       𝑡rs = 1.65𝑅𝐶, no overshoot (Section 3.3)
1          one overshoot, critical damping
1 + √2     maximally flat frequency response of amplitude transfer
≫ 1        oscillation at the natural frequency
Fig. 3.44. Improvement of the rise time of a step signal by shunt compensation.
In Section 2.2.2 we learnt that electronic two-ports do not isolate the input from the output. This means, e.g., that the two forward parameters, the input impedance and the (forward) transfer parameter, are functions of the amount of load at the output, and that the reverse parameters are functions of the amount of “load” at the input, i.e., of the source impedance. To minimize this dependence, it is necessary to feed current inputs from current sources and voltage inputs from voltage sources. Likewise, high impedance loads should be fed by a dependent voltage source at the amplifier output, whereas high admittance loads call for a dependent current source at the output of the amplifier. This method is called impedance bridging, which means, in the case of voltage signals, that the load impedance is much higher than the source impedance. This way, a maximum of voltage is transferred, and the reduced currents keep the power consumption low. Whereas this impedance mismatch strongly supports the isolation of the source from the amplifier and of the amplifier from the load, sometimes the need for maximum power transfer overrides these considerations.
In Section 2.2.4.1, we learnt that maximum power is transferred from one element to another when the same power is dissipated in each element, e.g., if in a voltage divider the divided voltage is one-half of the total voltage. If this division is done by resistors, both impedances will have the same value (𝑅1 = 𝑅2); thus the term impedance matching. Applying power matching to complex impedances, the matching condition must be generalized to

𝑍1 = 𝑍2⋆   (3.98)

where ⋆ indicates the complex conjugate. For purely real impedances, this degenerates to 𝑅1 = 𝑅2, as we already know.
The load impedance must be the complex conjugate of the source impedance to achieve optimal
power transfer between them.
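A small numerical check of (3.98): for a source with an assumed internal impedance 𝑍1 = 50 Ω + j30 Ω, the sketch below compares the power delivered to a few candidate loads; the conjugate load receives the most. All values are illustrative only.

    # Power delivered to a load Z_L by a source v_s with internal impedance Z_S:
    # I = v_s/(Z_S + Z_L), P_L = |I|^2 * Re(Z_L)

    v_s = 1.0                 # assumed source amplitude (rms), arbitrary
    Z_S = complex(50, 30)     # assumed source impedance (illustrative only)

    def load_power(Z_L: complex) -> float:
        i = v_s / (Z_S + Z_L)
        return abs(i) ** 2 * Z_L.real

    for Z_L in (complex(50, 30), complex(50, -30), complex(58.3, 0), complex(25, -30)):
        print(f"Z_L = {Z_L}: P_L = {load_power(Z_L)*1e3:.3f} mW")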
It would take nearly an encyclopedia to list and fully describe all possible combina-
tions of the three types of single stages of the three-terminal components belonging
to the three semiconductor technology families (in both polarities)
– bipolar junction transistor (BJT),
– junction-gate field-effect transistor (JFET), or
– metal-oxide-semiconductor field-effect transistor (MOSFET).
In view of the myriads of integrated amplifiers available, the importance of this sec-
tion does not lie in designing but just in understanding such two-stage combinations.
Connecting single stages involves two steps
– setting the (quiescent) operating points, and
– optimizing the small-signal behavior.
Biasing individual stages depends strongly on the technology family, which can make
mixing components of different families trying. Although the correct dimensioning
with regard to the operating point is of high importance when designing a circuit,
one can more or less disregard these complications when one just tries to understand
the dynamic behavior of a circuit. In that case, the choice of the technology family is
less important, too, because the vital difference between these components lies in the
biasing of the devices.
Therefore, we settle for bipolar junction transistors (mixing both polarities). How-
ever, even if the common-emitter circuit is plainly the basic circuit, we will be using the
Fig. 3.45. Effect of the interface between two amplifier stages on the signal transmission and the bias setting: (a) direct coupling, (b) resistive coupling (with and without shunting capacitor), and (c) nonlinear coupling using a current source with a Zener diode or two forward biased diodes.
other two “basic” circuits as well, as this is more convenient than considering them as negatively fed back common-emitter circuits. By introducing the specific properties of the other semiconductor families ignored here, it is not too difficult to come to grips with any of the other stages not covered.
Choosing the bipolar junction transistor has the advantage that the other types
have, quite often, superior small-signal properties, so that for them the result can only
be better. So we are considering the following basic stages
– common-emitter circuit, providing the highest power gain and moderate input
and output impedance,
– common-base circuit (common-emitter circuit with maximum parallel–series
feedback) with current gain 𝑔𝑖 ≈ 1, being a typical current amplifier with low
input and high output impedance,
– common-collector circuit (common-emitter circuit with maximum series–parallel
feedback) with voltage gain 𝑔𝑣 ≈ 1, being a typical voltage amplifier with high
input and low output impedance.
Connecting (cascading) two stages requires more than just optimizing (small-signal)
transfer. A direct connection between two stages affects both the output operating
point of the first stage and the input operating point of the second stage (Figure 3.45).
A capacitor between the stages would effectively separate them as far as operating
Fig. 3.46. Simple direct coupling of a common-emitter input stage with a bipolar junction transistor in the output stage.
points are concerned. However, a high-pass filter would then be introduced into the signal path, hindering the passage of low frequencies, which is detrimental in many applications.
When combining two stages, it will hardly be necessary to isolate them from each other (i.e., impedance mismatch, Section 3.4). Therefore, making the output impedance of the first stage equal to the input impedance of the second stage will only be desirable for providing ample power gain. Most power is delivered by common-emitter circuits. Thus, we will start with common-emitter circuits as the input stage. In Figure 3.46, the six possible combinations of two directly coupled stages are shown.
Both combinations of two common-emitter circuits ((a) and (b)) are useful. How-
ever, (a) limits the output operating point voltage of Q1 to 𝑉BE of Q2 which is a severe
limitation.
The circuit (c) is important enough to have a name: the cascode circuit. It is a cascade of a common-emitter circuit with a common-base circuit. It overcomes a disadvantage of common-emitter circuits, namely that the Miller effect (Section 2.4.3.2) dynamically increases the input capacitance: the capacitance 𝐶BC between the base and the collector appears multiplied by a high factor according to 𝐶BC × (1 − 𝑔𝑣∗), limiting the bandwidth. For a cascode circuit the voltage gain of the first stage 𝑔𝑣1 is about −1, because its collector load is just the small emitter input resistance 𝑟e of the common-base stage.
In both cases the internal emitter resistance 𝑟e (in Ω) is given by 𝑟e = 25 mV/𝐼E , i.e., in both cases
it is dependent on the operating current 𝐼E (which is given in mA to yield 𝑟e in ohms).
Fig. 3.47. Simple direct coupling of a common collector input stage with a bipolar junction transistor in the output stage.
Usually, a long-tailed pair is symmetrically biased so that the input impedance of the second stage equals the output impedance of the first stage. Thus, the input voltage of the second stage will be one-half of the input voltage, considering that the emitter follower has an unloaded gain of one. Together with the transadmittance of the second stage

𝑔m2 = 𝐼C/25 mV
Fig. 3.48. Simple two-stage amplifiers with a common base circuit as the input stage.
Problems
3.68. When combining two stages of the same kind, is it possible to get a negative voltage gain?
3.69. Which cascade of two basic transistor circuits gives maximum voltage gain?
whatever comes to the mind of the circuit designer. This might require quite some
elaborate circuitry.
Problem
3.70. What is the simplest method of (passive) interfacing?
When electronic devices are interconnected in a system a new important aspect ap-
pears. The dimensions of such systems will (by far) exceed the centimeter-range. Con-
sidering that an electric signal that travels with the speed of light covers about 30 cm
in 1 ns makes clear that the space dependence of Maxwell’s equations must not be
disregarded when high frequencies (i.e., small time intervals) are involved.
For our purpose, we have to understand how an electromagnetic wave travels along a connecting conductor. Such conductors form a transmission line. A transmission line is a pair of parallel conductors showing specific electric characteristics due to distributed reactances along its length. It is designed to transmit electric signals (alternating current) with frequencies so high that their wave nature must be taken into account. An infinitely long line exhibits an (input) impedance which is purely resistive, called the characteristic (or natural) impedance 𝑍0. It is totally different from the resistance of the conductors themselves or from the leakage conductance of the dielectric insulation between the two conductors. As a property of the ideal transmission line, the characteristic impedance is entirely a function of the capacitance and inductance distributed homogeneously along the line's length. The general expression for the characteristic impedance 𝑍0 of a real transmission line, based on the transmission line model (Figure 3.49b), is
𝑍0 = √((𝑅 + j𝜔𝐿)/(𝐺 + j𝜔𝐶))   (3.100)

where
𝑅 is the specific resistance in Ω/m,
𝐿 is the specific inductance in H/m,
𝐺 is the specific conductance in S/m,
𝐶 is the specific capacitance in F/m.
For an ideal (lossless) transmission line, 𝑅 and 𝐺 are both zero, so after cancelling j𝜔 the equation for the characteristic impedance reduces to

𝑍0 = √(𝐿/𝐶) .   (3.101)

Thus, 𝑍0 has no imaginary part; it is purely resistive and frequency-independent.
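For orientation, a sketch evaluating (3.101) with assumed per-length values that are typical of a 50-Ω coaxial cable (about 100 pF/m and 250 nH/m; these numbers are not from the text). The propagation velocity of the lossless line, 𝑣 = 1/√(𝐿𝐶), is evaluated as well.

    import math

    C_per_m = 100e-12   # assumed specific capacitance in F/m (typical for 50-ohm coax)
    L_per_m = 250e-9    # assumed specific inductance in H/m (typical for 50-ohm coax)

    Z0 = math.sqrt(L_per_m / C_per_m)          # characteristic impedance, eq. (3.101)
    v  = 1.0 / math.sqrt(L_per_m * C_per_m)    # propagation velocity of the lossless line

    print(f"Z0 = {Z0:.1f} ohm, v = {v:.2e} m/s (= {v/3e8:.2f} c)")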
It is enlightening to model a transmission line by means of cascaded (infinitesimal) lumped sections. Such a section is shown in Figure 3.49 together with a slightly simplified circuit diagram that combines the specific resistances of both conductors. (The specific resistance of the inner conductor will differ from that of the outer one, if only for geometric reasons.)
Fig. 3.49. (a) Infinitesimal model section of a transmission line. (b) Simplified circuit of a transmission line model.

The model section (Figure 3.49b) includes the specific resistance of both conductors and the specific inductance of both conductors, which are coupled by their magnetic field. Both the resistance and the inductance are subject to the skin effect, which increases the resistance and reduces the inductance with frequency. Besides, both the specific capacitance between the conductors and the specific conductance through the insulator are included.
The transfer of high frequency signals is affected by the parasitic series inductances and parallel capacitances of conductors even if their length is just a few centimeters. Thus, even a short transmission line must be terminated with a resistor 𝑅 equal to 𝑍0, simulating an infinitely long line (and making the input impedance purely resistive, namely 𝑍0). If a transmission line is not terminated with its characteristic impedance, then its input impedance is complex and not resistive. Consequently, the additional phase shift may be the cause of frequency instability (Section 3.6.3.2) in one or both of the systems to be connected, resulting in oscillations. Note that conducting loops should be avoided because a changing magnetic field may induce noise in the system.
For low enough frequency transmission (when the wavelength of the highest frequency component
is longer than the transmission line length), the termination resistor may be omitted because in
this case the wave character of the signal can be disregarded.
The electromagnetic wave propagates in the space between the center conductor and the outer shielding, independently of the grounding conditions of the cable. Usually, the outer conductor is close to ground (earth), making the setup asymmetric with regard to ground, which is, however, without consequence for the signal transport.
The outer conductor, the shield of a coaxial cable, should either be connected to ground or have
low impedance toward ground so that its shielding property is most effective.
Long coaxial cables are predestined to form a ground loop. A ground loop is a conductor that connects two points that are supposedly at equal potential (usually earth) but actually are at different potentials, causing a current to flow in the conductor. These ground loops constitute a source of noise in the system. The usual remedy is the so-called single-point ground, where all elements of an electrical system are connected at one point (which is grounded). For coaxial cables, this means that the shield is connected only at one end. By breaking the ground loop, the unwanted current is prevented from flowing. However, as the shield may then act as an antenna, the pick-up of radio frequency interference is furthered by this practice. Galvanic isolation is another way to prevent current flow in the shielding between sections of a system, as no conduction occurs. Care must be taken that galvanic isolation does not limit the bandwidth of the signal path.
Efficient grounding is an art that takes a lot of experience. One proper single-point grounding is
definitely better than any number of arbitrary groundings.
Fig. 3.50. Field lines in the cross section of a coaxial cable (electric
field lines are shown dashed, magnetic field solid).
(a) In a typical coaxial cable, the signal propagates with roughly two-thirds of the speed of light. As the envelope of the wave propagates through the cable with this velocity, it is a group velocity (and not a phase velocity). Figure 3.50 shows an example of field lines in the cross section of a coaxial cable.
(b) As the conductors are parallel, the surfaces of constant propagation delay (or of
constant phase shift) are planes perpendicular to the conductors.
(c) In the planes of constant propagation delay (or constant phase shift) both fields
are constant in time. For that reason, the two conductors are oppositely charged.
(d) An infinitely long cable is purely resistive without frequency dependence. Low-loss insulation materials like polytetrafluoroethylene (PTFE), polyethylene, or polystyrene, with a dielectric constant 𝜖r ≈ 2.3, are used to produce coaxial cables with a characteristic impedance of 50 Ω. Such 50-Ω cables have low signal loss, and the low impedance allows the handling of higher frequencies. Compared to a 100-Ω cable, the corner frequency of the unavoidable low-pass filter (Section 3.4.2, formed by the termination resistor at the output of the cable and the input capacitance of the receiving device) is a factor of 2 higher. On the other hand, twice the current must be supplied for the same voltage signal, which can be demanding on the signal source.
When transmitting data through a cable, the main interest is to compare what goes
in at one end with what comes out at the other. Aside from propagation delay and
characteristic impedance the following additional properties must be considered, e.g.,
Signal reflection: a coaxial cable does not only unavoidably delay, but it also reflects
a signal at its end. Obviously, a reflection from the end of an infinitely long cable
has no effect because it would never return. Terminating a cable with its charac-
teristic impedance simulates an infinitely long cable eliminating reflections, i.e.,
there is no discontinuity and the signal transmission is undisturbed. There are
two extreme cases,
– that of a short-circuited end, and
– that of an open-circuited end.
If there is a short-circuit at the end of the cable, one might wonder whether there
is any voltage signal at all at the cable input. There is (for a while) because how
should the input “know” of the short-circuit? The wave must travel to the end and
back to the input to provide this information, i.e., for twice the delay time 𝑡d of
the cable the input impedance 𝑍i of the cable is resistive with 𝑍0 . After that time
𝑍i = 0.
What happens at the end of the cable, at the short-circuited output? Obviously,
the output voltage 𝑣o stays zero all the time. To simplify matters, let us consider
the transmission of a step voltage signal. A step voltage of 2 V amplitude sup-
plied through a source impedance 𝑅S = 𝑍0 feeds a cable with a characteristic
impedance of 𝑍0 . Then a voltage step of 1 V will enter the input of the cable and
travel to the end of the cable. After twice the travel time 𝑡d the input impedance
of the cable will be zero and consequently the voltage at the input zero, as well.
Fig. 3.51. Correct termination at the sending end (series resistor 𝑅 = 𝑍0 − 𝑍o(𝑖o)) and at the receiving end (𝑅 = 𝑍0) of a coaxial cable with a characteristic impedance of 𝑍0.
Although the voltage source produced a step, the input voltage at the cable will be a rectangular voltage signal of length 2 × 𝑡d, and the output voltage signal will stay zero due to the short-circuit. How does continuity allow the latter? Until the signal reaches the end of the cable, the cable must behave as if it were infinitely long, i.e., the voltage step of 1 V must be transmitted to the short-circuit. As the voltage is zero at the short-circuit, a voltage step of −1 V must be generated travelling in the other direction so that the sum is zero. This is called reflection of the voltage signal. On its way back, the −1 V step wipes out the +1 V step, i.e., more and more of the cable is short-circuited until the reflection reaches the input, making the input voltage zero. By then, all of the cable has the property of a short-circuit.
For an open-circuit at the output, the situation is dual. In this case, we must consider the current step signal of 1 V/𝑍0 that enters the cable. No current can flow out of the output of the cable, i.e., the current is cancelled by a current flowing in the opposite direction. This reflected current wipes out the primary current, i.e., no current flows, which is equivalent to zero conductance. When this condition of zero conductance has reached the input, the (nonexistent) voltage division with an open-circuit gives the full source voltage of 2 V at the input.
If the source impedance depends on the output current (like the output impedance of an emitter follower, Section 3.5.1), an output stage with low output impedance (e.g., a voltage follower) in series with a resistor can be used to provide an amplitude-independent impedance of 𝑍0 (Figure 3.51), thus avoiding reflections due to impedance mismatch.
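The bookkeeping of the reflected step described above can be sketched in a few lines, assuming an ideal lossless line with a one-way delay 𝑡d, matched at the sending end (𝑅S = 𝑍0) and short-circuited at the far end (reflection coefficient −1). The chosen numbers (2 V source step, 𝑡d = 50 ns) are illustrative only.

    # Input voltage of a shorted, source-matched line as a function of time:
    # the 1 V incident step is present until the -1 V reflection returns after 2*t_d.

    t_d = 50e-9          # assumed one-way delay of the cable in seconds
    v_step = 2.0         # assumed open-circuit source step in volts (R_S = Z_0)
    rho_end = -1.0       # reflection coefficient of a short-circuited end

    def v_input(t: float) -> float:
        incident = v_step / 2.0 if t >= 0 else 0.0               # 1 V enters the matched line
        reflected = rho_end * v_step / 2.0 if t >= 2 * t_d else 0.0
        return incident + reflected                              # reflection is absorbed at the matched source

    for t_ns in (-10, 0, 50, 99, 100, 150):
        print(f"t = {t_ns:4d} ns: v_in = {v_input(t_ns * 1e-9):.1f} V")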
Any practicable coaxial cable will have connectors on either side. It must be stressed that these connectors must have the same characteristic impedance as the cable itself to avoid reflections from them and/or from the mating connectors mounted on the devices. In addition, discontinuities in the geometric dimensions (diameters of the inner and the outer conductor) must be avoided to achieve best results. For an undisturbed transmission of high frequencies, it is best to avoid connectors altogether and/or to avoid cascading different types of cables, even if they have the same characteristic impedance.
Fig. 3.52. Example of an output voltage signal of a coaxial cable as a response to a voltage step input signal.
When connecting dozens of subsystems with coaxial cables to form a system, it is prudent to use only proven and tested cables from one's own stock to avoid needless cable failures.
Problems
3.71. Switching on of a DC voltage source of 2 V with 𝑅S = 50 Ω would give, in the
ideal case, a step signal of 2 V amplitude at its output. Such a source is connected to
a 50 Ω cable.
(a) What is the voltage at the input when the cable is infinitely long?
(b) What is the voltage at the input when the cable is correctly terminated?
(c) If a 10 m long 50-Ω cable, in which the signal travels with 2/3rd of the speed of
light, is short-circuited at the end, how long (with respect to the time of switching)
does it take for the input signal of 1 V to become 0 V? (Take for the speed of light
0.30 m/ns.)
(d) Give for the case (c) the time dependence of the voltage amplitude of the signal at
the input and the output.
(e) Give for the case (c) the time dependence of the amplitude of the current signal at
the input and the output.
3.72. A sinusoidal signal of 50 MHz is transmitted through a 100 m long coaxial cable,
having at this frequency a specific attenuation of 0.12 dB/m. How much of the voltage
amplitude gets lost?
If the input impedance of the receiving device is too high, it must be shunted by a resistor so that the combined impedance matches that of the cable.
The dynamic response of active two-ports is the result of the frequency and time response of the components inside the two-port, taking the loading conditions into account.
When we include the frequency dependence of the transfer function of a (linear) two-
port into power considerations, we must know the (power) gain at each frequency. Up
to now, we have come across the equation for the power 𝑝L dissipated in a load L to
be at any moment 𝑡
𝑝L (𝑡) = 𝑣L (𝑡) × 𝑖L (𝑡) . (3.102)
Transferring this equation to a sinusoidal steady-state signal with the angular fre-
quency 𝜔 we get
𝑝L (𝑡) = 𝑣̂ sin (𝜔𝑡 + 𝜑𝑣 ) × 𝑖 ̂ sin (𝜔𝑡 + 𝜑𝑖 ) . (3.103)
The addition theorem for cosines gives us the following two equations:
cos (𝛼 − 𝛽) = cos 𝛼 cos 𝛽 + sin 𝛼 sin 𝛽 (3.104)
cos (𝛼 + 𝛽) = cos 𝛼 cos 𝛽 − sin 𝛼 sin 𝛽 (3.105)
Subtracting (3.105) from (3.104) and dividing by two yields sin 𝛼 sin 𝛽 = (1/2) × [cos(𝛼 − 𝛽) − cos(𝛼 + 𝛽)], so that with 𝛼 = 𝜔𝑡 + 𝜑𝑣 and 𝛽 = 𝜔𝑡 + 𝜑𝑖 we get

𝑝L(𝑡) = 𝑣̂ × 𝑖̂ × (1/2) × [cos(𝜑𝑣 − 𝜑𝑖) − cos(2𝜔𝑡 + 𝜑𝑣 + 𝜑𝑖)]
      = (𝑣̂/√2) × (𝑖̂/√2) × [cos 𝜑 − cos(2𝜔𝑡 + 2𝜑𝑣 − 𝜑)]   (3.107)
with the phase shift 𝜑 between voltage and current and the root-mean-square values of voltage 𝑣rms and current 𝑖rms:

𝜑 = 𝜑𝑣 − 𝜑𝑖   (3.108)
𝑣rms = 𝑣max/√2   (3.109)
𝑖rms = 𝑖max/√2   (3.110)
To get the mean power, one has to integrate over one cycle. The mean of any
cos(𝜔𝑡) over one cycle is zero because the positive lobe is a mirror image of the nega-
tive lobe. Thus, the second term inside the brackets vanishes and we get for the power
averaged over one cycle
𝑃 = 𝑣rms × 𝑖rms × cos 𝜑 . (3.111)
𝑃 is called real power and is measured in watts (W). The factor cos 𝜑 is called the power factor, and |𝑆| = 𝑣rms × 𝑖rms is called apparent power and is given in VA (volt-ampere). If the load is purely resistive, the two electrical variables have identical waveforms, the phase shift 𝜑 is zero, and cos 𝜑 = 1. Real power is needed to provide energy. If the load is purely reactive, then the voltage and current are out of phase by 90 degrees and cos 𝜑 is zero. This power is called reactive power 𝑄. It is zero when averaged over one cycle. Power factors are called leading or lagging depending on the sign of the current phase angle with respect to the voltage. Thus, the lag of an inductive load may be (partly) compensated for by adding a capacitive load. Figure 3.53 summarizes in a vector diagram how the various kinds of power are related.
Fig. 3.53. Complex power 𝑆 is the vector sum of the real 𝑃 and the reactive power 𝑄. The apparent power is the amplitude of the complex power.
Real power is along the real axis. As reactive power does not transfer energy, it is
represented along the imaginary axis. This can also be expressed by using complex
numbers
𝑆 = 𝑃 + j𝑄 . (3.112)
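The relations (3.111) and (3.112) translate directly into a few lines of code; the numbers below (230 V rms, 2 A rms, 𝜑 = 30°) are assumed for illustration only.

    import math

    v_rms, i_rms = 230.0, 2.0      # assumed rms values (illustrative only)
    phi = math.radians(30.0)       # assumed phase shift between voltage and current

    S_abs = v_rms * i_rms          # apparent power |S| in VA
    P = S_abs * math.cos(phi)      # real power, eq. (3.111), in W
    Q = S_abs * math.sin(phi)      # reactive power in var
    S = complex(P, Q)              # complex power, eq. (3.112)

    print(f"P = {P:.1f} W, Q = {Q:.1f} var, |S| = {abs(S):.1f} VA (= v_rms*i_rms = {S_abs:.1f} VA)")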
Problems
3.75. What is the apparent power if the phase angle between voltage and current is
60° and the real power is 500 W?
3.76. A sine wave signal with amplitude of 10 V and a frequency of 𝑓 = 1 kHz is
dissipated in a resistor of 1 kΩ.
(a) What is the dissipated power?
(b) What is the dissipated power if it is a bipolar square wave signal (±10 V, duty cycle
50%) ?
(c) What is the dissipated power if it is a unipolar square wave signal (10 V, duty cycle
50%)?
(d) If in the case (c) the (average) dissipated power is just somewhat less than the
power rating of the resistor, is this still true for 𝑓 = 1 MHz or 𝑓 = 1 Hz?
Reactive elements cause frequency dependence in all circuit parameters. The majority of operational amplifiers are voltage-dependent voltage sources. Such an amplifier does not show any frequency dependence if it is ideal: it has infinite open-loop voltage gain, infinite input impedances (zero input currents), infinite bandwidth, an infinitely high slew rate, and zero output impedance. Modern integrated operational amplifiers approximate several of these ideal parameters closely (see also Section 2.3.5.2). However, the following imperfections must be taken into account when confronted with real components.
Finite bandwidth: all amplifiers have a finite bandwidth and consequently a finite signal rise time.
Gain-bandwidth product: due to the finite bandwidth, the voltage gain at DC does not apply to higher frequencies. The gain of a typical operational amplifier is approximately inversely proportional to frequency, i.e., its gain-bandwidth product is constant. This low-pass characteristic of an all-purpose amplifier is introduced deliberately because it stabilizes the circuit by introducing one dominant time constant. Typical low-cost amplifiers have a gain-bandwidth product of a few MHz. Present limits of gain-bandwidth products lie far beyond 1 GHz.
Slewing: present if the rise time of a signal increases with the height of said signal. The slew rate is usually specified in V/μs. Slewing is usually caused by charging internal capacitances of the amplifier with a limited (constant) current (Section 3.2.1), in particular those capacitances used to effectuate its frequency compensation.
Input capacitance: most important for higher frequency operation because it reduces the input impedance.
Noise: all electronic components are subject to noise. These components will have 10⁰ to 10² nV/√Hz noise performance.
Not all of these imperfections must be considered all of the time. Consequently, they
are dealt with whenever appropriate.
The general relations derived for static feedback in Section 2.4 hold to a first order
(see, e.g., Example 3.7) also in the dynamic case if the variables are either made de-
pendent on the time 𝑡 or the angular frequency 𝜔. This is in particular true for the
general feedback equation
𝐴F(𝑡) = 𝐴(𝑡)/(1 − 𝐴(𝑡) × 𝐵(𝑡))   (3.113)

or, using the complex notation to include the frequency-dependent phase shift,

𝐴F(j𝜔) = 𝐴(j𝜔)/(1 − 𝐴(j𝜔) × 𝐵(j𝜔))   (3.114)
response of the amplitude of the transfer function of the feedback system becomes
𝑎F(𝜔) = −1/𝑏
for angular frequencies up to 𝜔max. Up to 𝜔max the circuit with feedback has the inverse property of the feedback circuit 𝐵. As 𝐵 is built from (linear) resistors with frequency-independent properties, the circuit with feedback will also have a transfer function with a flat frequency response.
As the case of frequency-independent negative feedback is of utmost importance, we will investigate this situation some more. With

𝐴F(j𝜔) = 𝐴(j𝜔)/(1 + 𝐴(j𝜔) × 𝑏)   (3.116)

and

𝐴(j𝜔) = 𝐴(𝜔 = 0) × 1/(1 + j𝜔𝜏)   (3.117)

we get

𝐴F(j𝜔) = [𝐴(𝜔 = 0)/(1 + j𝜔𝜏)] / [1 + 𝑏 × 𝐴(𝜔 = 0)/(1 + j𝜔𝜏)]
        = 𝐴(𝜔 = 0)/(1 + j𝜔𝜏 + 𝑏 × 𝐴(𝜔 = 0))
        = 𝐴(𝜔 = 0)/(1 + 𝑏 × 𝐴(𝜔 = 0) + j𝜔𝜏)
        = [𝐴(𝜔 = 0)/(1 + 𝑏 × 𝐴(𝜔 = 0))] × 1/[1 + j𝜔𝜏/(1 + 𝑏 × 𝐴(𝜔 = 0))] .   (3.118)
Again, the complex transfer function with negative feedback has the shape of a simple low-pass filter with an upper corner frequency 𝜔uF = (1 + 𝑏 × 𝐴(𝜔 = 0))/𝜏, which is increased by the (forward) return difference (1 + the closed-loop gain at low frequencies). The time constant 𝜏F is likewise decreased by that factor.
In Figure 3.54, the frequency dependence of the amplitude term 𝑎(𝜔) of the amplifier is compared with that of the circuit with feedback, 𝑎F(𝜔), which is reduced at low frequency by the factor 1 + 𝑏 × 𝐴(𝜔 = 0).
The amplitude term is decreased by the factor 1 + 𝑏 × 𝐴(𝜔 = 0) and the frequency 𝜔u of the upper turnover point is increased by the same factor to 𝜔uF, i.e., the product of the gain at lower frequencies with 𝜔u is a constant that does not depend on the closed-loop gain. This effect is called the constant gain-bandwidth product
𝐴 × 𝜔u = 𝐴 F × 𝜔uF (3.119)
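A numerical illustration of (3.118) and (3.119) with assumed values 𝐴(𝜔 = 0) = 10⁵, 𝜏 = 1 ms, and 𝑏 = 0.01 (all illustrative only):

    import math

    A0  = 1e5        # assumed low-frequency open-loop gain
    tau = 1e-3       # assumed open-loop time constant in seconds
    b   = 0.01       # assumed (real, frequency-independent) feedback factor

    return_difference = 1.0 + b * A0
    A0_F  = A0 / return_difference                 # low-frequency gain with feedback
    tau_F = tau / return_difference                # reduced time constant, eq. (3.118)

    f_u  = 1.0 / (2.0 * math.pi * tau)             # corner frequency without feedback
    f_uF = 1.0 / (2.0 * math.pi * tau_F)           # corner frequency with feedback

    print(f"A0 * f_u   = {A0 * f_u:.3e}")          # gain-bandwidth product, eq. (3.119)
    print(f"A0F * f_uF = {A0_F * f_uF:.3e}")       # ... is unchanged
    print(f"A0_F = {A0_F:.1f}, f_uF = {f_uF/1e3:.1f} kHz")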
Taking the actual frequency responses, it would be obvious that above about
10𝜔uF the two frequency responses coincide, i.e., that above that frequency feedback
has practically no effect on the transfer function. As also the phase shift of a low-pass
filter does not change much at all beyond 10𝜔uF , it can be inferred that the frequency
responses of the complex transfer functions are very close (nearly identical) for the
two cases.
If the high-frequency response is identical, then the short-time response must be
identical, too. This means that at the beginning of a signal there is no effect of feed-
back on said signal, i.e., a very short signal does not “feel” the effect of feedback. This
can easily be visualized. The static concept of feedback presupposes that input and
output are in equilibrium. However, even if the input signal rises instantly, the time constant of the amplifier makes the output signal rise with its rise time, i.e., it takes the output signal some time, longer than the input signal rise time, to reach its full value. The static feedback representation presumes that the full output signal is fed back. Consequently, the feedback action is bound to be smaller if the output signal has not reached its full value. At the beginning of the input signal the output signal has not developed at all; therefore, the feedback action increases from zero at the very beginning of the input signal to its full value when the output signal without feedback would have reached its full value. Of course, these deliberations do not consider the propagation delay through the circuit, which plays a role only for very short (fast) signals.
Example 3.7 (Analytical proof that (negative) feedback has no impact on very short signals). Let an amplifier without feedback be characterized by its gain 𝐴 and one time constant 𝜏. Applying feedback, we get 𝐴F = 𝐴/(1 + 𝐴𝐵) and 𝜏F = 𝜏/(1 + 𝐴𝐵). We apply a step signal to the input. We must compare the amplitudes of the output signals at a time 𝑡 which is a fraction 𝑓 of 𝜏F, i.e., 𝑡 = 𝑓 × 𝜏F = 𝑓 × 𝜏/(1 + 𝐴𝐵). For the bare amplifier the response 𝑎(𝑡) is
Fig. 3.55. Signal shapes in a three-stage amplifier with and without (dashed) series–parallel feedback.
Figure 3.55 sketches how the voltage signals in a three-stage amplifier with series–parallel feedback must look to account for the reduced rise time of the output signal in the feedback situation.
In Figure 3.55 the signal shapes after each stage of a three-stage amplifier are shown to explain how feedback reduces the effect of a low-pass filter after the second stage (representing the limited bandwidth of the amplifier). All three stages provide positive voltage gain; the first is a differential amplifier with its inverting input used for negative feedback. Keep in mind that each of the amplifiers amplifies according to its intrinsic gain, which is independent of any feedback. Thus, the reduction of the output signal by the feedback action is achieved by reducing the effective input voltage of stage one, by feeding the output signal to its second terminal, which in turn reduces the input voltages at the two other stages, too. The effect of the low-pass filter after stage two requires an overshoot at the input of this filter in order to have at the output of stage three a signal that resembles the input signal of stage one. This overshoot is the result of the difference between the input signal and the attenuated output signal fed back to the inverting input.
Example 3.8 (Slewing due to an excessive signal amplitude). The relevant data of the circuit of Figure 3.55 are
– open-loop gain per stage 𝑔𝑣𝑘 = 10
– closed-loop gain 𝑔𝑣F = 1 + 𝑅F1/𝑅F2 = 10
– time constant 𝜏 = 𝑅 × 𝐶 = 1 μs
– rise time without feedback 𝑡rs = 2.2 μs
– rise time with feedback 𝑡rsF = 0.022 μs
As long as the signals with overshoot are transmitted undisturbed, the output signal has a rise time of 0.022 μs. By increasing the amplitude of the input signal, all internal signals will be increased accordingly. However, at a given amplitude the top of the overshoot at the output of amplifier stage 2 will leave the dynamic range of the amplifier, i.e., the upper part of the corrective signal will not be amplified any more. Consequently, the rise time of the output signal will increase. Raising the input signal further will result in the loss of more and more of the overshoot, increasing the rise time further. When the flat portion of the signal is at the edge of the dynamic range of stage two, no corrective overshoot will be transmitted any more, and the output signal will have the rise time of the amplifier without feedback, namely 2.2 μs. An increase of rise time with input signal amplitude is called slewing (Section 3.2.1).
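The rise-time figures of Example 3.8 can be reproduced with the single-dominant-time-constant relations used above (𝑡rs = 2.2𝜏, reduction by the return difference); the sketch assumes the ideal resistive divider feedback factor 𝑏 = 1/𝑔𝑣F.

    tau   = 1.0e-6   # time constant R*C inside the loop (value from Example 3.8)
    g_v_k = 10.0     # open-loop gain per stage (from Example 3.8)
    g_vF  = 10.0     # closed-loop gain (from Example 3.8)

    A = g_v_k ** 3                    # total open-loop gain of the three stages
    b = 1.0 / g_vF                    # assumed feedback factor of the ideal resistive divider
    t_rs  = 2.2 * tau                 # rise time without feedback
    t_rsF = t_rs / (1.0 + A * b)      # rise time with feedback (single dominant time constant assumed)

    print(f"t_rs = {t_rs * 1e6:.2f} us, t_rsF = {t_rsF * 1e6:.3f} us")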
feedback
𝐴F(j𝜔) = 𝐴(j𝜔)/(1 − 𝐴(j𝜔) × 𝐵(j𝜔)) = 𝐴(j𝜔)/(1 − [−𝐴(j𝜔) × 𝐵(j𝜔)])   (3.121)

becomes that for positive feedback at a given frequency 𝜔0:

𝐴F(j𝜔0) = 𝐴(j𝜔0)/(1 − 𝐴(j𝜔0) × 𝐵(j𝜔0)) = 𝐴(j𝜔0)/(1 − [+𝐴(j𝜔0) × 𝐵(j𝜔0)]) .   (3.122)
Furthermore, the closed-loop gain is a real number at 𝜔0 because at a phase shift of −180° the imaginary part is zero. If the three dominating time constants lie in the active circuit A (e.g., if B is built from resistors and is, therefore, real by nature), then the above equation degenerates to

𝑎F(𝜔0) = 𝑎(𝜔0)/(1 − 𝑎(𝜔0) × 𝑏) .   (3.123)
Actually, at 𝜔0 the total phase shift of the closed-loop gain is 0°: negative feedback provides a minus sign corresponding to a 180° phase shift, and the three low-pass filters shift the phase by −180°. What happens now if the closed-loop gain becomes +1 (or even larger)? Mathematically, one expects 𝑎F to become infinite. Obviously, in the real world such a thing cannot happen, and the reason should be obvious: as usual, we used small-signal parameters, i.e., all transfer functions are supposed to be linear. In reality, the gain 𝐴 decreases with amplitude, i.e., the transfer function is not linear; at both ends of the dynamic range it even becomes zero. Thus, even if the closed-loop gain is +1 (or even larger), there will be an amplitude at which it becomes smaller than one, establishing a stable (temporary) operating point. In our problem, this means that the feedback arrangement will oscillate with the angular frequency 𝜔0, with an amplitude at which the closed-loop gain is minutely smaller than one. With an amplifier, this is called frequency instability, which obviously must be avoided by all means. To this end, there should be a phase margin of at least 50° at the frequency at which the closed-loop gain reaches a magnitude of 1, and a gain margin of at least −10 dB at a phase shift of 180°. In narrow-band applications, these margins may be lowered to 30° and −6 dB.
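As a numerical illustration of this stability condition, the sketch below assumes a loop consisting of a frequency-independent product 𝑎 × 𝑏 and three identical low-pass sections with time constant 𝜏 (the situation described above), and checks whether the magnitude of the loop product exceeds 1 at the frequency where the three sections together contribute −180° (each −60°, at 𝜔𝜏 = √3). The low-frequency values of 𝑎 × 𝑏 are assumed for illustration only.

    import math

    def loop_gain(w_tau: float, ab0: float) -> complex:
        """Assumed loop transfer: low-frequency value ab0 with three identical low-pass poles."""
        return ab0 / (1.0 + 1j * w_tau) ** 3

    w_tau_180 = math.sqrt(3.0)   # three equal poles each contribute -60 deg here, -180 deg in total

    for ab0 in (5.0, 20.0):      # assumed low-frequency values of a*b (illustrative only)
        g = loop_gain(w_tau_180, ab0)
        verdict = "stable" if abs(g) < 1.0 else "prone to oscillation"
        print(f"a*b = {ab0:4.1f}: magnitude at -180 deg = {abs(g):.2f} -> {verdict}")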
The Nyquist criteria give the ultimate answer on the stability of a feedback circuit.
To this end the Nyquist plot, a parametric plot of the closed-loop transfer function, is
used for assessing the stability of a system with feedback. The real part of the trans-
fer function is plotted on the abscissa, the imaginary part on the ordinate using the
frequency as parameter. Or with polar coordinates, the amplitude of the transfer func-
tion is displayed as the radial coordinate, and its phase as the angular coordinate. To
measure the frequency dependence of the closed-loop transfer function, the loop must be opened at some point. As the same loading conditions as in the closed-loop situation must be reestablished, it is wise to open the loop at a point where loading is not serious.
The Nyquist criteria differentiate between two cases.
– Instability of the first kind: if the amplifier is stable without feedback, then the circuit with feedback is also stable if the critical point (−1, 0) in the complex number plane lies outside the Nyquist plot.
– Instability of the second kind: if the bare amplifier (without feedback) is unstable, then there must be one counter-clockwise encirclement of the point (−1, 0) by the Nyquist plot for each pole (i.e., a zero in the denominator) of 𝐴(j𝜔) to make the amplifier with feedback stable.
What can be done if a system with negative feedback proves to be unstable according
to the first criterion? We need to tailor the closed-loop transfer function accordingly.
Obviously, there are two remedies. One has to change either the phase shift or the
amplitude (or both).
– The obvious choice is to attenuate the open-loop gain of the amplifier for all
frequencies so much that the closed-loop gain curve does not contain the point
(−1, 0) any more as shown in Figure 3.57c.
– Alternatively, a phase lag circuit may be inserted. Such a circuit is shown in Fig-
ure 3.58 together with its (asymptotic) Bode plots. To find the correct value of com-
ponent 𝑅1 it is necessary to account for the output impedance of the circuit (the
impedance of the Thevenin source) at the position where the phase lag circuit is
inserted. In Figure 3.59 its effect on the Nyquist plot is shown.
– Finally, a phase lead circuit (Figure 3.60) may be introduced into the loop reduc-
ing the phase shift so that a phase shift of −180° will occur at higher frequencies
where the amplitude is less than 1. A reduction of the amplitude at lower frequen-
cies is unavoidable. It may be compensated by increasing the gain by the same
amount without endangering the stability, as shown in Figure 3.61 for both the
Bode and the Nyquist plot.
Fig. 3.57. Frequency independent attenuation: (a) Bode plot of the frequency dependence of the closed-loop gain (with three time constants). (b) Ditto of the phase shift. (c) Effect of reducing the closed-loop gain on the Nyquist plot.
Fig. 3.58. Phase lag circuit (a), and Bode plots for amplitude (b) and phase shift (c).
Fig. 3.59. Effect of a phase lag circuit on the closed-loop transfer function: (a) frequency dependence of the amplitude transfer function without and with phase lag circuit, (b) ditto for the phase shift dependence, and (c) Nyquist diagram without and with phase lag circuit.
Fig. 3.60. Phase lead circuit: (a) circuitry, (b) Bode plot of the frequency dependence of the ampli-
tude transfer function, and (c) ditto, but for the phase shift.
Fig. 3.61. Effect of a phase lead circuit with compensating amplification on closed-loop transfer
function: (a) Bode plot of the frequency dependence of the amplitude, (b) ditto for the phase shift,
(c) Nyquist plot, combining (a) and (b).
Voltage (emitter) followers have the largest closed-loop gain (𝐵 close to one). Thus,
they are prone to instabilities of the first kind. A remedy for emitter followers is the
insertion of a small resistor in series with the collector.
Problems
3.78. Give the dimension of the four characteristic transfer functions of the active el-
ement 𝐴, and of the circuit with feedback 𝐴 F . Namely, 𝑣o /𝑣i , 𝑖o /𝑖i , 𝑣o /𝑖i , and 𝑖o /𝑣i .
3.79. Calculate the complex voltage transfer function 𝐺𝑣 (j𝜔) of the phase-lag circuit
in Figure 3.58.
3.80. Calculate the complex voltage transfer function 𝐺𝑣 (j𝜔) of the phase-lead circuit
in Figure 3.60.
3.81. The frequency response of an amplifier is that of a simple low-pass filter with an
upper corner frequency of 𝑓u = 3.501 MHz and an open-loop voltage gain 𝑔𝑣 (0 Hz) =
10. Increase the gain to 100 without the help of additional active elements.
(a) How can this be done? Give details.
(b) What is the rise time of a step function processed by this new amplifier?
(c) Rectangular signals with a frequency of 50 kHz and a duty cycle of 0.5 (= 50%)
shall be amplified by this amplifier. Will the shape of the resulting output signal
resemble that of the input signal sufficiently?
Figure 3.62 shows how the output of an operational amplifier with series–parallel feed-
back is loaded by a resistive feedback network. The (ideal) voltage controlled voltage
source is loaded by the output resistance 𝑅o feeding the feedback resistor 𝑅F and the
complex impedance 𝑍1 at the input (𝑅i shunted by 𝑅1 and 𝐶i ). 𝑅i can be disregarded
if 𝑅i ≫ 𝑅1 . The output capacitance 𝐶o shunts 𝑅o . This is a case where switching to
a current source is helpful. Replacing the real voltage source by a real current source
(Norton’s theorem) shows at once that 𝐶o shunts 𝑅o . Thus, the feedback network in-
troduces two (additional) time constants, one at the input 𝜏i = 𝑅1 × 𝐶i , and one at the
output 𝜏o = 𝑅o × 𝐶o . The closed-loop voltage gain 𝑔𝑣F for low frequencies (at which
capacitances can be disregarded) is given by (disregarding 𝑅i)
$$g_{vF}(\omega = 0) \approx g_v \times \frac{R_1 + R_F + R_o}{R_1} = g_v \times \left(1 + \frac{R_F}{R_1} + \frac{R_o}{R_1}\right). \tag{3.124}$$
Fig. 3.62. Effective capacitive loading of the voltage controlled voltage source of an operational
amplifier: (a) amplifier with stray capacitances, (b) series–parallel feedback with 𝑅F , 𝑅1 and 𝐶F , (c)
feedback network, redrawn.
Fig. 3.63. Adjustment of the variable feedback capacitor to achieve frequency compensation: (a) 𝐶F
too small (b) 𝐶F too large (c) 𝐶F correct.
$$v_o = v_i \times \frac{R_i}{R_i + R_1} = \frac{v_i}{1 + \frac{R_1}{R_i}}\,. \tag{3.129}$$
Making 𝜏1 = 𝜏i gives for the value of the shunting capacitor 𝐶1 = 𝑅i × 𝐶i /𝑅1 . Again
a variable capacitor would be so adjusted that rectangular signals (Figure 3.63) would
be transmitted correctly. However, this is only a side effect. The main application is the
ten times reduced capacitive load. With 𝜏i = 𝑅i × 𝐶i and 𝜏1 = 𝑅1 × 𝐶1 the impedance
𝑍p of the probe is given by
$$Z_p = \frac{R_i}{1 + j\omega\tau_i} + \frac{R_1}{1 + j\omega\tau_1} = \frac{R_i + R_1}{1 + j\omega\tau_p} \tag{3.130}$$
$$C_p = C_i \times \frac{R_i}{R_i + R_1} = \frac{C_i}{1 + \frac{R_1}{R_i}} = \frac{C_i}{v_i/v_o}\,, \tag{3.131}$$
i.e., the capacitive loading of the circuit by the measurement using an attenuating
probe is reduced by the attenuation factor.
The behavior of the probe in use is just that of a tenfold increased resistance
shunted by a tenfold reduced capacitance. As the main contribution to the capaci-
tive loading is the connecting cable, an active probe puts an (pre-)amplifier on the tip
of the probe. The capacitive loading by the probe is reduced to a few pF which can
be further reduced by about a factor of 3 by introducing attenuation also for the ac-
tive probe. Of course, the mechanical minimum capacity of about 1 pF will always be
present.
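As a small worked example of the relations above (all scope and cable values are assumed, and the factor of ten is chosen for illustration only), the following sketch computes the probe components and the load presented to the measured circuit.

```python
# Sketch with assumed values: design of a x10 attenuating probe and the
# resulting load, using C_1 = R_i*C_i/R_1 and C_p = C_i/(1 + R_1/R_i).
R_i = 1e6                 # assumed oscilloscope input resistance
C_i = 30e-12 + 100e-12    # assumed scope input plus 1 m of cable capacitance

attenuation = 10                  # v_i/v_o at low frequencies
R_1 = (attenuation - 1) * R_i     # series resistor of the probe
C_1 = R_i * C_i / R_1             # compensation condition tau_1 = tau_i

R_p = R_i + R_1                   # resistance loading the measured circuit
C_p = C_i / (1 + R_1 / R_i)       # capacitance loading the measured circuit

print(f"R_1 = {R_1/1e6:.0f} MOhm, C_1 = {C_1*1e12:.1f} pF")
print(f"load on the circuit: {R_p/1e6:.0f} MOhm shunted by {C_p*1e12:.0f} pF")
```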
Fig. 3.65. Circuit of the probe from Problem 3.86 (component values in the figure: 1 kΩ, 1 nF, 50 kΩ, 20 pF).
Problems
3.85. Attenuating probe.
(a) Give the values (resistance, capacity) of attenuating elements in an attenuating
probe. It should attenuate by a factor of 5. The impedance of the oscilloscope is
1 MΩ shunted by 30 pF, and the 1 m long cable has a specific capacity of 100 pF/m.
(b) If this probe is in use with the oscilloscope, how does it burden the circuit (which
resistance shunted by which capacitance)?
(c) In which cases is the use of an attenuating probe the only remedy?
For inverting operation amplifiers, we have found that the characteristic transfer quantity is the transimpedance, which in the ideal approximation is 𝑟mF = −𝑍F(j𝜔). Considering the finite voltage gain 𝑔𝑣 we get, with 𝑟mF(j𝜔) = 𝑔𝑣(j𝜔) × 𝑍iF(j𝜔) and with 𝑍iF(j𝜔) = 𝑍F(j𝜔)/(1 + 𝑔𝑣(j𝜔)) (Miller effect, Section 2.4.3.2),
$$r_{mF}(j\omega) = -g_v(j\omega) \times \frac{Z_F(j\omega)}{1 + g_v(j\omega)} = -Z_F(j\omega) \times \frac{1}{1 + \frac{1}{g_v(j\omega)}}\,. \tag{3.133}$$
The last part of this equation – with a high enough voltage gain of the operational
amplifier (throughout the frequency range of interest!) – hardly differs from the ideal
answer.
The factorization in the first part gives rise to the following interpretation: an am-
plifier with parallel–parallel feedback has the same transfer function as a voltage am-
plifier with a gain of −|𝑔𝑣 (j𝜔)| cascaded with an impedance that has the same value
as the dynamic impedance of the feedback impedance (or vice versa). These two ar-
rangements are compared in Figure 3.66.
Fig. 3.66. (a) An inverting operation amplifier with voltage source, and (b) a cascade of a voltage
divider and an amplifier without feedback that has the same transfer function.
$$g_{vS} = -g_v(j\omega) \times \frac{1}{1 + \frac{Z_S(j\omega)}{Z_F(j\omega)} \times (1 + g_v(j\omega))}\,. \tag{3.136}$$
As the small-signal transfer functions are identical what is then the difference be-
tween the operation amplifier arrangement and the cascade of a complex voltage di-
vider with an amplifier of the same voltage gain, i.e., what is the difference between
an active filter and a passive filter with amplification? There are three advantages of
active filters that can be singled out.
– 𝑍F can be larger by a factor of (1 + |𝑔𝑣 (𝜔 = 0)|) in the case of an active filter giving
the same frequency dependence. If this impedance is a capacitor, this fact can be
a huge advantage.
– Only the amplifier in the active filter case has feedback (by means of 𝑍F) contribut-
ing to linearity and stability of the system.
– In the cascade of a passive filter with a post-amplifier, the amplifier must have an
upper corner frequency about ten times higher than the upper corner frequency of
the filter so that the phase shift introduced by it is negligible, i.e., its bandwidth must
be ten times wider than that of an active filter. This results, e.g., in additional (white)
noise.
Fig. 3.68. Active low-pass filter with feedback resistor 𝑅F.
Fig. 3.69. Bode plots of several cases of low-pass filters: bare amplifier, feedback with basic 𝑅S𝐶F low-pass, feedback with added resistor 𝑅F, and for the ideal low-pass.
Without 𝑅F the feedback capacitor blocks any feedback at 𝜔 = 0, i.e., there is no direct-current feedback action. Thus, the position of the quiescent operating point will reflect all changes in
the circuit (e.g., the input offset voltage) to the full extent. This can be remedied by
shunting the capacitor by a feedback resistor 𝑅F as shown in Figure 3.68. This resis-
tor stabilizes the transimpedance and consequently also the source-voltage gain. The
equation using finite 𝑔𝑣 -values
$$\frac{v_o}{v_S} = -g_v(j\omega) \times \frac{1}{1 + \left(j\omega C_F + \frac{1}{R_F}\right) \times (1 + g_v(j\omega)) \times R_S} \tag{3.139}$$
degenerates in the ideal case (1/𝑔𝑣(j𝜔) = 0) to
$$\frac{v_o}{v_S} = -\frac{R_F}{R_S} \times \frac{1}{1 + j\omega R_F C_F}\,. \tag{3.140}$$
At lower frequencies, the second factor is irrelevant, so that the voltage gain is given by
the ratio of the two resistors. For frequencies beyond the corner frequency 𝜔uF = 1/(𝑅F𝐶F)
the roll-off with 20 dB per decade reflects this low-pass filter property. Figure 3.69 com-
pares the Bode plots of the operational amplifier, of the ideal active low-pass filter, of
the simple low-pass filter and of a low-pass filter with an additional feedback resistor.
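A short numerical sketch may help to see how little (3.139) and (3.140) differ as long as the loop gain is large. All component and amplifier values below are assumed; a single-pole open-loop gain is used as a stand-in for a real operational amplifier.

```python
# Sketch only (assumed values): ideal active low-pass (3.140) vs. the
# finite-gain expression (3.139) with a single-pole op-amp model.
import numpy as np

R_S, R_F, C_F = 1e3, 10e3, 16e-9        # assumed, corner near 1 kHz
g_v0, f_u = 1e5, 10.0                   # assumed open-loop gain and corner (Hz)

f = np.logspace(0, 6, 7)                # 1 Hz ... 1 MHz
w = 2 * np.pi * f
g_v = g_v0 / (1 + 1j * f / f_u)         # assumed single-pole open-loop gain

ideal = -(R_F / R_S) / (1 + 1j * w * R_F * C_F)                  # (3.140)
real  = -g_v / (1 + (1j * w * C_F + 1 / R_F) * (1 + g_v) * R_S)  # (3.139)

for fk, gi, gr in zip(f, ideal, real):
    print(f"f = {fk:9.0f} Hz   ideal {abs(gi):7.3f}   finite gain {abs(gr):7.3f}")
```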
An important application of an inverting amplifier with parallel–parallel feed-
back by means of a capacitor is the charge sensitive amplifier as shown in Figure 3.70.
It is used in connection with detectors that respond to radiation of all kind by pro-
ducing free charge, e.g., photo-electrons. As the negative parallel–parallel feedback
effectively forces all the input current 𝑖i to flow into the capacity 𝐶F , the output volt-
age 𝑣o does not differ much from the voltage 𝑣𝐶 across 𝐶F (the inverting input of the
amplifier is at virtual ground)
$$v_o \approx v_C = \frac{Q}{C_F} = \frac{1}{C_F} \times \int i_C\,\mathrm dt\,. \tag{3.141}$$
Fig. 3.70. Principle of a charge sensitive amplifier.
Fig. 3.71. Circuit of Problem 3.87 (component values in the figure: 1 kΩ, 1 nF, 5 V).
Although the principle is simple, much know-how is involved in the design of a prac-
tical charge-sensitive amplifier, in particular when a low-noise instrument is needed.
If 𝑖𝐶 is constant (e.g., provided by an invariable current source), one gets
$$v_C(t) = \frac{i_C}{C_F}\,t\,. \tag{3.142}$$
The formation of the ensuing voltage ramp is discussed in Section 3.2.1. This relation
is used in all circuits for the conversion of voltage amplitude to a time interval and
vice versa (Section 4.5).
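A tiny numerical illustration of (3.141) and (3.142), with made-up values chosen only for illustration:

```python
# Sketch only (assumed values): output step and ramp slope of a charge
# sensitive amplifier with feedback capacitance C_F.
C_F = 1e-12        # assumed feedback capacitance, 1 pF
Q   = 50e-15       # assumed collected charge, 50 fC
i_C = 10e-9        # assumed constant input current, 10 nA

print(f"output step v_o ~ Q/C_F       = {Q / C_F * 1e3:.0f} mV")    # (3.141)
print(f"ramp slope  dv_C/dt = i_C/C_F = {i_C / C_F:.0f} V/s")       # (3.142)
```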
Problem
3.87. A single rectangular pulse (5 V high, 0.1 ms long) is fed into the input of the
circuit of Figure 3.71. The amplifier has a dynamic range of ±10 V, the diode is assumed
to be ideal.
(a) What is the maximum 𝑣omax and what the minimum voltage 𝑣omin at the output?
(b) What is the quiescent output operating voltage?
Fig. 3.72. Principle of an active high-pass filter.
with
$$\tau_F = C_S \times \frac{R_F}{1 + |g_v(\omega = 0)|}$$
the effective time constant 𝜏F takes into account the dynamic decrease of 𝑅F due to the Miller effect (Section 2.4.3.2).
With an ideal operational amplifier (with 1/𝑔𝑣 (𝜔) = 0), the voltage transfer from
source to output is given by
$$\frac{v_o}{v_S} = -j\omega R_F C_S \tag{3.144}$$
which means that it is proportional to the angular frequency 𝜔 for all 𝜔 values, i.e., its
increase with frequency is 20 dB per decade in the full frequency range. Such an ideal
filter has differentiating properties (Section 3.2.1). Thus, the output voltage is propor-
tional to the derivative of the voltage across the capacitor. For sinusoidal signals, the
minus sign, which stems from the negative gain of the operational amplifier, is tanta-
mount to a phase difference of 180° between the output and the input signal.
A real active high-pass filter has the combined frequency response of the opera-
tional amplifier with an upper corner frequency 𝜔u and that of a high-pass filter with
a lower corner frequency 𝜔l = 1/𝜏F . If the frequency response of the amplifier is ad-
equate (𝜔u ≫ 10𝜔l ), then its frequency response has no impact, and the total lower
frequency response is that of a high-pass filter with 𝜏F , multiplied by the low frequency
voltage gain of the amplifier −|𝑔𝑣 (𝜔 = 0)| up to about 0.1𝜔u . Actually, the active high-
pass filter is a band-pass filter with a bandwidth 𝐵𝑊 = 𝜔u − 𝜔l . Thus, to perform
properly up to higher frequencies an active high-pass filter needs an operational am-
plifier with a particularly high upper corner frequency 𝜔u , i.e., a wide bandwidth.
In Figure 3.73b, the frequency response of the amplitude transfer is shown for sev-
eral active high-pass filter arrangements. If, as shown in Figure 3.73b, 𝜔u is not high
enough, the frequency dependence of the filter does not intersect the flat portion of
the frequency response of the amplifier, but the negative slope of −20 dB/decade, i.e.,
at the corner frequency there is a change in the slope by 40 dB/decade corresponding
to two phase shifts of 90° each, indicating the danger of instability, because of posi-
tive feedback (Section 3.6.3.2). In addition, for angular frequencies above 𝜔l there is
no practical benefit of negative feedback action. To counteract this situation a resistor
𝑅S can be introduced (Figure 3.73a) resulting in a time constant 𝜏S = 𝑅S × 𝐶S . Under
ideal conditions (1/𝑔𝑣 (𝜔 = 0) = 0) we get
$$\frac{v_o}{v_S} = -\frac{R_F}{R_S} \times \frac{1}{1 + \frac{1}{j\omega\tau_S}}\,. \tag{3.145}$$
Above the angular frequency 𝜔lF = 1/𝜏S , the gain is −𝑅F /𝑅S , independent of 𝜔 until
the curve hits the roll-off of the amplifier curve.
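The band-pass character can be made visible numerically. The sketch below (all values assumed) multiplies the ideal response (3.145) by a single-pole roll-off that stands in for the amplifier's upper corner frequency; the gain is flat only between roughly 1/(2𝜋𝜏S) and that corner.

```python
# Sketch only (assumed values): active high-pass (3.145) combined with an
# assumed single-pole amplifier roll-off gives a band-pass response.
import numpy as np

R_S, R_F, C_S = 1e3, 10e3, 160e-9       # assumed, 1/(2*pi*tau_S) ~ 1 kHz
f_u = 100e3                             # assumed upper corner of the amplifier

f = np.logspace(1, 7, 7)                # 10 Hz ... 10 MHz
w = 2 * np.pi * f
tau_S = R_S * C_S

high_pass = -(R_F / R_S) / (1 + 1 / (1j * w * tau_S))    # (3.145)
roll_off  = 1 / (1 + 1j * f / f_u)                       # assumed amplifier pole
total = high_pass * roll_off

for fk, g in zip(f, total):
    print(f"f = {fk:10.0f} Hz   |v_o/v_S| = {abs(g):6.2f}")
```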
Fig. 3.73. (a) Circuit diagram of an active high-pass filter with added source resistor 𝑅S . (b) Fre-
quency response of the amplitude transfer of the amplifier and several active high-pass filter ar-
rangements.
Fig. 3.74. (a) Combined active high-pass low-pass filter (b) Frequency response of the amplitude
transfer.
Problems
3.88. How does feedback change the product of gain 𝐴(𝜔 = 0) times bandwidth 𝐵𝑊?
3.89. Name three advantages of active filters over passive filters (with appropriate am-
plification).
A gyrator is a linear two-port that interrelates the current into one port of a two-port
with the voltage on the other port and vice versa. The output variables (across the
load) are connected to the input variables by
𝑣L = −𝑅 × 𝑖i , and (3.146)
𝑖L = −𝐺 × 𝑣i (3.147)
where 𝑅 is a conversion factor with the dimension of a resistance and 𝐺 = 1/𝑅. Thus, a
gyrator is self-dual, i.e., it is at the same time its own dual counterpart, just like a resistor and
a switch. It inverts the current–voltage characteristic of an electronic component or
circuit. In the case of linear elements, the inversion of the characteristic results in the
inversion of the impedance. It does not only make a capacitor behave as an inductor,
but also a parallel LC circuit behave as a series CL circuit which is another feature
of duality. It is primarily used in replacing inductors which are bulky, expensive and
are in their electric properties much farther away from an ideal inductance than a
capacitor from an ideal capacitance.
In Figure 3.75, the realization of a gyrator by means of two negative impedance
converters (NICs, Section 2.5.4.1) is shown. The analysis of this circuit is best done in
three steps.
1. Let us call the impedance at the inverting input of NIC1 𝑍A . Then the input impe-
dance of the circuit is 𝑍i = −𝑍A .
2. Let us call the impedance at the inverting input of NIC2 𝑍B = −𝑅.
3. Let us introduce 𝑍C which is 𝑍B shunted by (𝑅 + 𝑍L )
$$Z_C = -(R + Z_L) \times \frac{R}{Z_L}\,.$$
Fig. 3.75. Gyrator realized with two negative impedance converters (NICs) and loaded with 𝑍L.
$$Z_i = -Z_A = \frac{R^2}{Z_L}\,. \tag{3.148}$$
In case that 𝑍L = 1/(j𝜔𝐶L), the input impedance 𝑍i = j𝜔𝑅²𝐶L behaves as an inductance 𝐿i = 𝑅²𝐶L. This inductance is not only based on a capacitor, but its value can easily be adjusted by proper choice of the resistors 𝑅.
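A quick numerical check of (3.148) with assumed values shows the simulated inductance 𝐿i = 𝑅²𝐶L:

```python
# Sketch only (assumed values): gyrator input impedance for a capacitive load.
import numpy as np

R, C_L = 1e3, 100e-9          # assumed conversion resistance and load capacitor
L_i = R**2 * C_L              # simulated inductance, here 0.1 H

for f in (100.0, 1e3, 1e4):
    w = 2 * np.pi * f
    Z_i = R**2 * 1j * w * C_L                    # (3.148) with Z_L = 1/(j*w*C_L)
    print(f"f = {f:7.0f} Hz: Z_i = j{Z_i.imag:8.1f} Ohm = j*w*L_i = j{w*L_i:8.1f} Ohm")
```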
Problem
3.90. If the load of a gyrator is a real inductor with series resistance and parallel ca-
pacitance, what stray properties of the capacitance produced by this gyrator are ex-
pected?
4 Time and frequency (oscillators)
Time 𝑡 always means time difference. Its unit is the second (s). Time is an analog vari-
able just as its inverse, the frequency 𝑓. Frequency is measured in units of hertz (Hz).
As the trigonometric functions require that the angle is given in radians, we use the
angular frequency 𝜔 = 2𝜋𝑓. Its unit is (s−1 ). As in everyday life, time has several faces
also in electronics. There is
– the absolute time which is actually relative to Greenwich Mean Time (GMT),
– the time difference within a signal (rise time, fall time, signal length, etc.)
– the time difference between signals (synchronization), and
– the minimum time interval that defines simultaneity.
Obviously, this boils down to two kinds of time differences (intervals), those within a
signal and those between two (or more) signals.
The question of time within a steady-state signal can be reduced to timing within
one cycle (period). For sinusoidal signals often the phase within this cycle is used in-
stead of the fraction of the cycle time. Thus, phase is another analog variable of a
signal.
When comparing the timing of two or more signals the question of simultaneity is
reduced to a question of coincidence. Two signals are said to be coincident if they occur
within a finite resolution time which is subject to the speed of the electronics involved.
Obviously, if signals are coincident, it does not mean that the events generating these
signals are simultaneous. However, simultaneity itself can never be definitely assured
because, for practical purposes, a time difference of zero does not exist (experimen-
tally).
Timing within one signal has been covered at several places. It is connected with the
following terms: rise time, fall time, propagation delay, signal length, phase shift, etc.
The establishment of time marks representing the instant of an event requires trig-
gers which are based on a relaxation oscillator as we will see in this chapter.
Time (intervals) can be measured (relatively) to the highest precision of all phys-
ical quantities as frequencies can be measured precisely by counting. The precision
with which time can be measured is better than 10−16 when using an ytterbium clock.
This number is so small that only a comparison can help. The (relative) uncertainty of
the age of the universe would be less than half a minute.
As time is a conjugate variable of the angular frequency 𝜔 (which is the basis
for the Fourier transform, Section 3.1.1) time and frequency belong together. Thus, as
a first step we will investigate how repetitive signals are produced by so-called fre-
quency generators. Even though harmonic oscillations, which supply the basis for Fourier analysis, are most important, we will start with rectangular oscillations because they are less sophisticated.
Problems
4.1. Is the mains frequency of 60 Hz (or 50 Hz) an analog or a digital quantity?
In Section 2.5.4, we discussed stable positive feedback with a closed-loop gain of < 1.
Figure 4.1 shows an operational amplifier with positive series–parallel feedback. The
characteristic transfer parameter for such an operation amplifier is the voltage gain
𝑔𝑣F . From our general discussions on feedback, we get its relation to the voltage gain
of the amplifier 𝑔𝑣A as
$$g_{vF} = \frac{g_{vA}}{1 - AB} = \frac{g_{vA}}{1 - g_{vA} \times \frac{R_1}{R_1 + R_F}}\,. \tag{4.1}$$
Figure 2.47a shows the linearized voltage transfer function for 𝑅1 = 0 (no feedback, 𝐴𝐵 = 0, dotted line), for 𝑅1 < 𝑅F/(|𝑔𝑣A| − 1) (for 𝐴𝐵 < 1, dashed line), for 𝑅1 = 𝑅F/(|𝑔𝑣A| − 1) (for 𝐴𝐵 = 1, full line), and for one example with 𝐴𝐵 > 1 (dashed–dotted curve). In
the last case there is hysteresis: When the input voltage 𝑣i reaches the upper thresh-
old voltage 𝑣thr,u the output voltage 𝑣o jumps from the low output value 𝑣oL to its high
value 𝑣oH . When lowering the input voltage below the lower threshold voltage 𝑣thr,l
which is lower than 𝑣thr,u the output voltage jumps back to 𝑣oL . Thus, we obtain a
transfer characteristic with a hysteresis. Usually the output voltage range is close to
the voltage range of the power supply so that 𝑣oL ≈ −𝑉− , and 𝑣oH ≈ +𝑉+ . Then the
width of the hysteresis 𝑣thr,u − 𝑣thr,l is approximately the fraction 𝑓 = 𝑅1/(𝑅1 + 𝑅F) of the output voltage swing 𝑣oH − 𝑣oL , i.e., the voltage division ratio of the positive feedback network.
Fig. 4.1. Operational amplifier with positive series–parallel feedback via 𝑅1 and 𝑅F.
A hysteresis in the transfer function is the key for two well-defined output levels.
Quite obviously group (a) cannot be realized with a transformer in the feedback loop
as transformers do not transfer signals with 𝜔 = 0. Circuits of group (b) that apply positive feedback using transformers are called blocking oscillators.
Fig. 4.2. All six types of relaxation oscillators can be realized with a long-tailed pair circuit.
Figure 4.2 presents a
long-tailed pair circuit that with appropriate circuitry can operate in any of these two
times three modes. By appropriate biasing (choice of 𝑅B ), the two stages are either
symmetrically (current) biased, or one of them works as a class C amplifier. Making the feedback impedance 𝑍F either resistive, or capacitive, or in the form of a series resonant circuit (Section 3.4.5.1) provides the threefold variety.
Problem
4.5. What kind of (positive) feedback is active in the circuit of Figure 4.2?
When an amplifier is of class C, a gating signal is required to take the operating point into the amplification region. Depending on the position of the quiescent operating point (L or H), a positive or negative signal is required. Such a signal may be very short, as long as its amplitude (the amount of charge) is sufficient.
Problem
4.6. Does the gating signal of a class C amplifier have positive or negative polarity?
Fig. 4.3. (a) Basic circuit of a Schmitt trigger (𝐶F is just for frequency compensation) (b) Time de-
pendence of some input voltage 𝑣i and the resulting output voltages of a comparator 𝑣oA (i.e., with
zero-width hysteresis) and a Schmitt trigger 𝑣oB (with a hysteresis).
nominal threshold voltage 𝑣𝑖thr . (See discussion in Section 4.1 regarding the width of
the hysteresis.)
Figure 4.3a shows the circuit of a Schmitt trigger using an integrated comparator
as an active element. The input signal is applied at the inverting input, the purely
resistive positive feedback goes to the noninverting input. The quiescent voltage at the
noninverting input (the common mode voltage) stems from the reference voltage 𝑣ref
which establishes the threshold voltage. A speed-up capacitor 𝐶F across the feedback
resistor 𝑅F speeds up the operation (reduces the rise-time) by increasing the closed-
loop gain at higher frequencies. It compensates for the effect of the input impedance
of the amplifier (Section 3.6.3.3).
In Figure 4.3b the response of the Schmitt trigger to some input voltage is sketched.
The middle line is for a dummy zero-width hysteresis, the other two lines indicate the
width of the hysteresis. This figure is self-explanatory.
Clearly, the signal input voltage that may be applied to a comparator (Schmitt trig-
ger) must be within limits so that the circuit is not destroyed. Usually, a dynamic input
range from rail-to-rail (i.e., within the supply voltage(s)) can be expected. The same is
true for the reference input voltage.
The primary use of a comparator (and a Schmitt trigger) is as a discriminator which decides whether an (analog) input signal is larger than a given value (the threshold
voltage) or not. This decision is clearly a binary decision resulting in an output sig-
nal of H or L. In this respect, the Schmitt trigger is a link between analog and digital
(binary) electronics.
An alternative use is as trigger (Section 4.4.1). In this case, the emphasis lies on
the other analog property of a signal, its time of occurrence.
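The two threshold voltages follow from simple superposition. The sketch below assumes, as the circuit description suggests, that the reference voltage reaches the noninverting input through a resistor 𝑅1 and the output through 𝑅F; all numerical values are invented.

```python
# Sketch only (assumed topology and values): thresholds of a Schmitt trigger
# whose noninverting input is fed by v_ref via R_1 and by the output via R_F.
R_1, R_F = 1e3, 100e3
v_ref = 2.0
v_oL, v_oH = 0.0, 5.0          # assumed output levels

def noninverting_input(v_out):
    return (v_ref * R_F + v_out * R_1) / (R_1 + R_F)

v_thr_u = noninverting_input(v_oH)   # input must rise above this to switch to L
v_thr_l = noninverting_input(v_oL)   # and fall below this to switch back to H
print(f"upper threshold  : {v_thr_u:.3f} V")
print(f"lower threshold  : {v_thr_l:.3f} V")
print(f"hysteresis width : {(v_thr_u - v_thr_l) * 1e3:.1f} mV")
```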
Fig. 4.4. Traditional circuit of a monostable multivibrator with two npn transistors.
forcing the input signal. If the returned signal is larger than the input signal, we have
the case of an unstable positive feedback (closed-loop gain > 1). The output signal of
Q1 switches to H terminating the positive feedback action due to zero gain (at 𝑖𝐶1 ≈ 0).
However, 𝐶1 got charged during the signal transmission to the difference between H
and L (see Section 3.3.2) so that with the collector of Q2 in the L state, 𝑣BE of Q1 is nega-
tive, keeping Q1 in the H state. Only after 𝐶1 has sufficiently discharged through 𝑅3 does the operating point of Q1 return to the amplifier region. Therefore, feedback action
is again possible, this time for a positive signal at the base of Q1 , bringing the collector
down to L and that of Q2 up to H, i.e., the quiescent condition is reestablished. Thus,
the length of the transient state is determined by the time constant 𝑅3 𝐶1 (disregarding
the small output impedance of Q2 which is saturated).
In Figure 4.5 is shown what happens if during the discharge time of 𝐶1 a new neg-
ative input signal occurs. Obviously, the discharging process of the capacitor is termi-
nated, it becomes charged again, and the discharging starts anew. As can be seen from
Figure 4.5c, the output signal 𝑣o has not its standard length (shown in Figure 4.5b),
which is determined by the time constant. The time difference between the two pulses
is added to the standard length. Thus, the resolution time of standard monostable mul-
tivibrators is about the length of one standard output signal.
In Figure 4.2 the long-tailed pair version of a monostable multivibrator is shown
with 𝑍F = 𝐶F .
In Figure 4.6 a monostable multivibrator based on an operational amplifier is
shown. At the input there is a high pass that provides a short negative spike to trig-
ger the one shot.
To avoid the loss of signals during the resolution time of a standard monostable
multivibrator one can make the circuit retriggerable by forcing the discharge of the
timing capacitor each time an input signal arrives, e.g., by means of an electronic
switch shunting the capacitor. Thus, in a retriggerable monostable, an additional in-
put pulse received during the resolution time initiates a new signal of standard length.
Fig. 4.6. Monostable multivibrator based on an operational amplifier.
Symmetric relaxation circuits do not have a preferred quiescent operating point con-
dition. It is L or H or neither.
Fig. 4.7. Traditional circuit of a symmetric astable multivibrator using npn transistors.
$$f = \frac{1}{T} = \frac{1}{\ln 2 \times (R_2 C_1 + R_3 C_2)} \approx \frac{1}{0.693 \times (R_2 C_1 + R_3 C_2)}\,. \tag{4.4}$$
For a 50% duty cycle both time constants 𝜏 must be the same, 𝜏 = 𝑅𝐶, so that the frequency becomes
$$f = \frac{1}{T} = \frac{1}{\ln 2 \times 2RC} \approx \frac{0.721}{RC}\,. \tag{4.5}$$
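A short numerical illustration of (4.4) and (4.5), with assumed component values:

```python
# Sketch only (assumed values): frequency of the astable multivibrator.
import math

R2, C1 = 10e3, 10e-9
R3, C2 = 10e3, 10e-9                                   # symmetric choice

f_general   = 1 / (math.log(2) * (R2 * C1 + R3 * C2))  # (4.4)
f_symmetric = 0.721 / (R2 * C1)                        # (4.5) with R*C = R2*C1

print(f"f (general form)  : {f_general:7.1f} Hz")
print(f"f (symmetric form): {f_symmetric:7.1f} Hz")
```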
In Figure 4.8, the circuit diagram of a symmetric astable multivibrator based on an
operational amplifier is given. Aside from the obvious use as clock generators, astable
multivibrators easily synchronize with some reference frequency. A free-running
astable multivibrator accurately locks to a reference frequency which may be two to ten times higher than its basic frequency, thus dividing this reference frequency.
In Figure 4.2, the long-tailed pair version of an astable multivibrator is shown,
when a capacitor 𝐶F as feedback impedance 𝑍F is used and the current biasing of the
two stages is symmetric.
Fig. 4.8. Circuit diagram of a symmetric astable multivibrator based on an operational amplifier.
Problems
4.9. Give the duty factor of the output signal delivered by an astable multivibrator.
4.10. Under what condition does the duty factor of an astable multivibrator equal
0.5?
Example 4.1 (Elementary analysis of the transition in a flip–flop from L to H). To flip
the bistable multivibrator of Figure 4.9 into the X = H state a positive signal must be
applied to the S(et) input. The starting condition (X = L state) is that Q2 is saturated
and Q1 in the off state. Both active elements are class C amplifiers, i.e., not in the am-
plifier region. The open-loop gain is zero (very small) and therefore so is the closed-loop gain. The loop for a set signal goes from the base of Q1 to its collector, then via
𝑅F2 to the base of Q2 . Then from its collector via 𝑅F1 back to the base of Q1 . Taking X as
the output (at the collector of Q2 ), we have a positive parallel–parallel feedback with
𝑅F1 as feedback element. As mentioned, speed-up capacitors (𝐶F1 and 𝐶F2 ) shunting
the feedback resistors compensate for the effect of the input capacitances of the tran-
sistors by frequency compensation (Section 3.6.3.3).
Fig. 4.9. Circuit of an SR bistable multivibrator using npn transistors.
A positive base current of sufficient amplitude (e.g., from the H state of another
circuit) at the S input takes Q1 into the amplifier region producing an amplified nega-
tive voltage signal at its collector which gets to the base of Q2 . By the negative signal at
its base Q2 gets into the amplifier region and produces a positive signal at its collector.
Via the feedback resistor 𝑅F1 , this positive signal increases the original input signal
by parallel feedback so that Q1 is driven into saturation. This makes the input voltage
𝑣BE2 of Q2 so small that no collector current flows and the output is in the H state. As in
the beginning, both transistors are now class C amplifiers with negligible open-loop
gain, so there is no feedback any more. The speed-up capacitors not only make the transition time shorter by increasing the amplitude of the fed-back high-frequency components, but they must also get discharged before the next transition occurs (see Section 3.3.2), introducing a minimum resolution time. Therefore, their values should not be larger than
frequency compensation (Section 3.6.3.3) requires.
In Figure 4.2 the long-tailed pair version of a bistable multivibrator is shown, when a
resistor 𝑅F as feedback impedance 𝑍F is used and the current biasing of the two stages
is symmetric.
Flip–flops are elementary circuits in binary electronics and will be covered in
Chapter 5.
Speed-up capacitors in the feedback branch decrease the rise time of the output signals due to
frequency compensation.
Problems
4.11. Identify the loop for a reset signal in the circuit of Figure 4.9.
4.12. Which input variable, current or voltage, does the positive feedback in Figure 4.9
increase?
As an open-loop gain not much greater than 1 suffices to yield a closed-loop gain ≥ 1,
a single-stage amplifier (with positive gain) is adequate. In the case of a common-
emitter circuit, the voltage gain is negative necessitating an inverting feedback circuit
to provide positive feedback. The only passive component that can provide signal in-
version is a transformer. Also in the case of a common-collector circuit a transformer is
necessary to boost the open-loop voltage gain beyond 1. However, a transformer can-
not transfer direct current, i.e., frequency components with 𝜔 = 0. Thus, neither an
equivalent to a bistable multivibrator nor a Schmitt trigger can be realized by such cir-
cuits. Traditionally, relaxation oscillators having a transformer in the feedback circuit
are called blocking oscillators. The minimal configuration of a free-running blocking
oscillator requires only a few discrete electronic components namely a three termi-
nal element as amplifier, one or a few resistors for biasing it, and a transformer. The
name stems from the blocking action at the three-terminal element (it is driven into
the cut-off region, resulting in an H signal). A blocking oscillator is another binary cir-
cuit because it switches between H and L. The transformer is the vital component of
a blocking oscillator. A pulse transformer with excellent high-frequency transmission
is required to create rectangular pulses with fast rise and fall times.
Blocking oscillators used to be important circuits in TV monitors employing step-
up high voltage transformers.
Problem
4.13. Why do transformers with an iron core not qualify for use in relaxation oscilla-
tors?
Fig. 4.10. Monostable blocking oscillator using a common emitter bipolar junction transistor.
D2 and positive feedback reinforces the input signal so that Q gets fully opened result-
ing in the L state of the output. (The transformer’s “winding sense” with regard to the
magnetic flux is indicated by the heavy dot, i.e., with reference to this dot, voltage
changes go in the same direction.) As the inductance discharges with the time constant
𝐿/𝑅 the primary (and the secondary) current decreases until the L state at the output
cannot be maintained anymore and the output voltage moves toward H. This positive
signal is transformed to a negative signal in the secondary windings reverse biasing
the diode and making 𝑣BE ≤ 0. Thus, Q is cut-off again resulting in an H output volt-
age. D1 protects Q against a too high collector voltage (Section 2.2.7.2). It connects the
output to 𝑉+ if the output voltage rises beyond 𝑉+ due to a voltage spike produced
by the inductance of the transformer as a result of a fast current change. The pulse
length 𝑇, i.e., the duration of the L state, is approximately given by
$$T \approx \ln\left(1 + \frac{n \times R}{r_e + R_E}\right) \times \frac{L}{R} \tag{4.6}$$
with 𝐿 the inductance of the transformer, 𝑛 the ratio of the number of secondary wind-
ings over that of the primary ones, 𝑅 the resistance over which 𝐿 discharges, i.e., the
sum of the parasitic resistance of the windings, the saturation resistance of Q, the in-
ternal emitter resistance 𝑟e , and the external emitter resistance 𝑅E .
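A rough numerical example of (4.6), with all component values invented for illustration only:

```python
# Sketch only (assumed values): pulse length of the monostable blocking oscillator.
import math

L   = 1e-3     # assumed transformer inductance
n   = 3.0      # assumed secondary-to-primary turns ratio
R   = 10.0     # assumed total resistance over which L discharges
r_e = 5.0      # assumed internal emitter resistance
R_E = 47.0     # assumed external emitter resistance

T = math.log(1 + n * R / (r_e + R_E)) * L / R          # (4.6)
print(f"pulse length T = {T * 1e6:.1f} us")
```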
Problems
4.14. Why is there no bistable blocking oscillator?
4.15. Why is there no blocking oscillator with the function of a Schmitt trigger?
resistance of the secondary windings of the transformer. Any disturbance (noise) with
a frequency component for which the closed-loop gain is ≥ 1 pushes the operating
point from the amplifier region either into the saturation region L (if this disturbance
has a positive lobe) or into the cut-off region H (if the lobe is negative). From there on
the discharging and charging of the inductance makes the output flip from H to L and
vice versa, i.e., in this regard the circuit behaves like a symmetric astable multivibra-
tor.
Problem
4.16. Why are blocking oscillators predestined to deliver high voltage output sig-
nals?
The start/stop oscillator in Figure 4.12 uses feedback in a completely different way. The
start signal is recirculated from the output of a comparator (Section 2.3.4.2) or a binary
OR gate (Section 5.3.1.3) via a delay line (Section 3.5.3.1) to the second input generating
the next signal with a time distance determined by the propagation delay in the circuit
and the value of the delay in the delay line. A fine adjustment of the period length can
be achieved by the low-pass filter consisting of the variable resistor and the capacitor
(determining the rise time of the recirculated signal and, therefore, the trigger moment
of the comparator). Such an arrangement ensures that the pulse distance is constant
between all signals independent of their position in the pulse train. As only delay lines
for rather short delay times are practical, such oscillators are particularly well suited for high-frequency operation up to 100 MHz and beyond.
Problem
4.17. What is the advantage of delay-line oscillators when a synchronous gated train
of equidistant pulses is required?
Fig. 4.12. Delay line oscillator using a comparator with fine adjustment of the period.
Table 4.2. Overview of types of (passive) feedback circuitry depending on the polarity of the amplification used for harmonic oscillators.
Fig. 4.13. The four candidates for voltage feedback circuitry providing at 𝜔0 a phase shift of 0°.
Fig. 4.14. Current dividers providing a transfer function with zero phase shift only at one frequency, 𝜔0 = 1/√(𝜏1𝜏2).
This final relation also applies when inductors rather than capacitors are used as re-
active components. Figure 4.13 summarizes the four elementary voltage dividers that
give for 𝜔0 zero phase shift. However, as can easily be seen, versions (c) and (d) also
provide zero phase shift for 𝜔 = 0 (with maximum transmission!), so that only (a) and
(b) fulfill both conditions. In Figure 4.14, the dual counterparts of Figure 4.13a and b
are shown.
From the stability condition with 𝐴𝐵 = 1 at 𝜔0 we get
$$A(\omega_0) = \frac{1}{B(\omega_0)} = \frac{\mathrm{Re}[Z_1] + \mathrm{Re}[Z_2]}{\mathrm{Re}[Z_2]} = 1 + \frac{\mathrm{Re}[Z_1]}{\mathrm{Re}[Z_2]} \tag{4.13}$$
so that with
$$\mathrm{Re}[Z_1] = R_1\,, \qquad \mathrm{Re}[Z_2] = \frac{R_2}{1 + \omega_0^2\tau_2^2}\,, \qquad \text{and} \qquad \omega_0^2 = \frac{1}{\tau_1\tau_2}$$
we get
$$A(\omega_0) = 1 + \frac{R_1}{R_2} + \frac{C_2}{C_1}\,. \tag{4.14}$$
How must 𝑅2 be chosen if 𝑅1 and 𝜔0 are given so that 𝐴(𝜔0) is minimal? The answer to this question is that 𝜏1 must equal 𝜏2 , making
$$\omega_0 = \frac{1}{\tau} \qquad \text{and} \qquad A(\omega_0) = 1 + \frac{2 \times R_1}{R_2}\,.$$
For 𝑅2 = 𝑅1 the closed-loop gain becomes 𝐴(𝜔0 ) = 3.
In Figure 4.15, the frequency dependence of the amplitude
$$a(\omega)b(\omega) = \frac{3}{\sqrt{7 + \omega^2\tau^2 + \frac{1}{\omega^2\tau^2}}} \tag{4.15}$$
and of the phase shift
$$\varphi(\omega) = \arctan\frac{\frac{1}{\omega\tau} - \omega\tau}{3} \tag{4.16}$$
Fig. 4.15. Frequency dependence of the closed-loop transfer function of a Wien-bridge type oscilla-
tor: (a) amplitude (b) phase shift.
Fig. 4.16. Schematic circuit diagram of a Wien's bridge oscillator.
of the closed-loop gain is shown. As can be seen, the amplitude peaks at 𝜔0 so that all
other frequencies have less amplification.
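This behavior is easy to verify numerically from (4.15) and (4.16); the short sketch below evaluates both for a few values of 𝜔𝜏 (the absolute value of 𝜏 is irrelevant here).

```python
# Sketch only: closed-loop gain (4.15) and phase shift (4.16) of the
# Wien-bridge type feedback, evaluated at a few values of x = w*tau.
import math

for x in (0.1, 0.5, 1.0, 2.0, 10.0):
    mag = 3.0 / math.sqrt(7 + x**2 + 1 / x**2)            # (4.15)
    phase = math.degrees(math.atan((1 / x - x) / 3))      # (4.16)
    print(f"w*tau = {x:5.1f}: |ab| = {mag:.3f}, phase = {phase:6.1f} deg")
```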
Of the two times two possible feedback configurations, only one became popu-
lar, namely the voltage division using capacitors. These oscillators are called Wien’s
bridge oscillators. Why this name includes bridge will be clear at once. For conve-
nience, such oscillators are built with equal pairs of resistors and capacitors. By me-
chanically coupling variable capacitors, it is then possible to continuously vary the
frequency 𝜔0 .
Figure 4.16 gives a schematic circuit diagram of a Wien’s bridge oscillator. Both
the negative and the positive feedback circuits are voltage dividers with the divided
voltages fed into a differential amplifier. Thus, the name bridge. The branch of the
positive feedback provides the selectivity for 𝜔0 fulfilling the frequency condition. The
negative feedback circuit is there to fulfill the stability condition 𝐴𝐵 = 1.
How is it possible to fully fulfill the stability condition? For a linear system (in the mathematical sense) this would not be possible. Even if the system were absolutely stable, the exact value 1 could never be realized by dimensioning the components. Fortunately, the voltage transfer function is only approximately linear and outside the amplifier range even strongly nonlinear. So the slope (the differential voltage gain) changes with the instantaneous operating point. It even becomes zero for very small and/or very large excursions. Thus, at some output level the closed-loop gain will reach the value 1 if it was larger than that in the quiescent operating point. The oscillator will oscillate with exactly that output amplitude.
As we have seen in Section 2.4.1.3, negative feedback improves linearity so that
we encounter a rather linear transfer function, except for the two nonlinearities be-
yond the linear range (where feedback ceases). Without further provisions the instan-
taneous operating point, where the stability condition would be fulfilled, would lie
outside the linear range. This means that a strongly nonlinear part of the transfer
function is involved which would distort the signal. A soft approach to the stability
conditions would strongly reduce this distortion. This can be achieved either by us-
ing a resistor 𝑅1 with positive temperature coefficient or choosing for 𝑅2 , the second
resistor of the voltage divider, a resistor with negative temperature coefficient. With
increased output amplitude, the increased current heats the resistors of the voltage
divider. Their values are changed in such a way that the loop gain of the positive feed-
back (that for small-signals is just above 1) will be reduced dynamically to 1.
In a more recent method 𝑅1 is replaced by a dynamic impedance. An n-channel
JFET is used as a voltage-controlled resistance. Raising the gate voltage of the JFET
which operates in the ohmic region increases the drain–source resistance. When there
is no output signal, the gate voltage is zero and the drain–source resistance is at the
minimum. Under this condition, the closed-loop gain is greater than 1, starting an
oscillation that is accompanied by an output signal. Using the rectified output signal
to bias the JFET increases the drain–source resistance of the JFET to such a value that
the amplitude of the output voltage becomes stable (because the closed-loop gain has
reached the value 1).
Originally, a filament bulb (which has a resistance with a pronounced positive
temperature coefficient) was used for 𝑅1 so that one can see the light of this bulb in-
side the housing of traditional Wien’s bridge oscillators. Of course, the temperature
of the filament inside the lamp depends on the equilibrium between electric power
dissipated in it and radiation power emitted from it. Clearly, at oscillation frequencies so low that the filament cools down within one period, the stability of the output amplitude is jeopardized. To have a stable equilibrium (i.e., a
stable resistance) also for low but not too low frequencies, the temperature should
not be too high, i.e., the current through the lamp should be rather small. Under this
kind of operating condition, the life of the lamp should be almost infinite. With these
precautions, an amplitude stability of 0.1% can easily be achieved even with a simple
two-stage amplifier. Using an operational amplifier that linearizes the transfer func-
tion very well a distortion of the sine wave of the order of 10−5 can be achieved. The
ultimate performance limit of this rather simple bridge configuration lies in the limited
common-mode rejection of the differential amplifier.
$$i_i = \frac{v_i - v_o}{R_s + \frac{1}{j\omega C_s}} = \frac{v_i \times (1 - g_v)}{R_s + \frac{1}{j\omega C_s}} \tag{4.17}$$
$$Y_i = \frac{i_i}{v_i} = \frac{(1 - g_v) \times \left(R_s - \frac{1}{j\omega C_s}\right)}{R_s^2 + \frac{1}{\omega^2 C_s^2}} = \frac{g_v - 1}{1 + \frac{1}{\omega^2\tau_s^2}} \times \left(\frac{1}{-R_s} + \frac{1}{j\omega C_s R_s^2}\right). \tag{4.18}$$
Fig. 4.17. Frequency-selective positive feedback (a) schematic circuit diagram (b) reduced circuit
diagram.
If
$$R = R \times \frac{1 + \frac{1}{\omega^2\tau^2}}{g_v - 1}\,, \tag{4.19}$$
this resonant circuit is purely reactive, i.e., it is undamped. From this equation we get
$$g_v = 2 + \frac{1}{\omega^2\tau^2}\,. \tag{4.20}$$
The resonant frequency 𝜔r of an undamped resonant circuit is given by
$$\omega_r = \frac{1}{\sqrt{LC}} = \sqrt{\frac{g_v - 1}{\left(1 + \frac{1}{\omega^2\tau^2}\right) \times C^2 R^2}} = \frac{1}{\tau} = \omega_0\,, \tag{4.21}$$
and the gain at 𝜔r becomes with 𝜔r = 1/𝜏
$$g_v(\omega_r) = 3\,. \tag{4.22}$$
Not surprisingly, the answers are identical with those of the general analysis. This
result supports the nomenclature which names amplifiers and elements with negative
impedance active elements.
Example 4.3 (Output impedance of a Wien’s bridge oscillator at 𝜔0). For small-signals,
i.e., under the assumption that linearity applies, it is clear from Figure 4.16 that the
total output impedance 𝑍oF (𝜔0 ) of the oscillator has three components, which lie in
4.2 Harmonic feedback oscillators | 243
parallel. Using this information and all the other facts diligently will allow us to arrive at the exact answer to this rather complicated task without much mathematical effort. We
know that
– the total output impedance consists of three individual impedances in parallel.
Thus, it will be wise to deal with its inverse value, the admittance, rather than
with the impedance;
– the voltage gain 𝑔𝑣 of the amplifier at 𝜔0 is exactly 3, i.e., the reverse voltage gain
is 1/3;
– 𝜔0 = 1/𝜏 and 𝜏 = 𝑅𝐶; and
– the dynamic value of an impedance 𝑍 that connects the output with the input of a voltage amplifier is 𝑍/(1 − 1/𝑔𝑣) (Section 2.4.3.2).
Thus, the impedance 𝑍F− of the negative feedback circuit is given by
$$Z_{F-} = \frac{R_1}{1 - \frac{1}{g_v}} = \frac{R_1}{2/3}\,, \tag{4.23}$$
that of 𝑍F+ , the impedance of the positive feedback circuit, by
$$Z_{F+} = \frac{R_s - \frac{j}{\omega_0 C_s}}{1 - \frac{1}{g_v}} = \frac{R_s \times \left(1 - \frac{j}{\omega_0\tau}\right)}{1 - \frac{1}{g_v}} = \frac{R_s \times (1 - j)}{2/3}\,. \tag{4.24}$$
The capacitive contribution is expected because the voltage division with the complex
admittance at the noninverting input which has a capacitive component, too, must
result in phase shift zero at 𝜔0 .
The total impedance 𝑍oF is obtained by shunting the output impedance of the am-
plifier 𝑍oA with 𝑍F− and 𝑍F+ . Switching to the admittance 𝑌oF , we get the moderately
simple relation
Problems
4.19. With a complex voltage divider made from elements of the same nature in the
feedback circuit the closed-loop gain becomes real not only at the frequency 𝜔0 but
also at 𝜔 = 0. How would such a circuit influence the operating point?
4.22. Is the oscillator frequency of a Wien’s bridge oscillator primarily selected by the
amplitude or by the phase shift?
4.2.1.2 LC oscillators
Returning to the simple frequency selective voltage dividers, we found in the previous
section that for the phase shift to be zero 𝑋1 𝑅2 = 𝑋2 𝑅1 must be true. This condition is
also met with purely reactive voltage division. If the reactances are of the same kind,
the phase shift of the divided signal is zero for all frequencies. As the amplitude trans-
fer is frequency independent so that no frequency gets selected, these cases must be
disregarded.
For reactances having different sign Figure 4.18 applies. There the stability con-
dition which requires voltage division to provide a closed-loop gain of 1, was antici-
pated.
The voltage divider of Figure 4.18 provides the positive feedback with
$$B(j\omega) = \frac{jX_2}{j\left(\omega L_1 - \frac{1}{\omega C_1} + X_2\right)} = \frac{1}{1 - \frac{1}{X_2} \times \left(\frac{1}{\omega C_1} - \omega L_1\right)} \tag{4.27}$$
Again we get a transfer function that is real for all frequencies. However, the amplitude
is frequency dependent. Let us now investigate the two possibilities for 𝑋2
1. 1/𝑋2 = −𝜔𝐶2 , and
2. 1/𝑋2 = 1/(𝜔𝐿2) .
In the first case, we introduce
$$C = C_1 \times \frac{C_2}{C_1 + C_2} \tag{4.28}$$
Fig. 4.18. Reactive voltage divider with inductance,
capacitance, and a general reactive element.
and with
$$\omega_r = \sqrt{\frac{1}{L_1 C}} \tag{4.29}$$
the denominator in (4.27) becomes zero, i.e., 𝐴(j𝜔r ) may even be zero to provide the
necessary closed-loop gain.
In the second case, we introduce
$$L = L_1 + L_2 \tag{4.30}$$
and with
$$\omega_r = \sqrt{\frac{1}{L C_1}} \tag{4.31}$$
the denominator of 𝐵(j𝜔) becomes zero, and again 𝐴(j𝜔r ) may even be zero to provide
the required closed-loop gain of 1 for a stable (undamped) oscillation.
Of course, we get the dual answer for current division. So we have arrived at the
ideal series resonant circuit, and the ideal parallel resonant circuit with capacitive
and inductive gain adjustments.
In two respects, these answers are unsatisfactory:
1. if an ideal resonant circuit could be realized it would not need amplification to
oscillate without damping, and
2. the losses in the circuit (stray resistance and conductance, the emission of elec-
tromagnetic radiation) need not be replenished by an active device.
The first factor takes care of the voltage reduction 𝑣Th /𝑣o . The complex transfer func-
tion of the simple complex voltage divider is given by
$$B = \frac{Z_3}{Z_3 + Z_4} \tag{4.33}$$
so that by comparison one gets
$$Z_4 = Z_2 + \frac{Z_{oA}}{Z_1} \times (Z_1 + Z_2 + Z_3) \tag{4.34}$$
Fig. 4.19. (a) Circuit diagram of a realistic LC frequency-selective positive feedback circuit. (b) Sim-
plified circuit by means of Thevenin’s theorem. (c) Equivalent simple complex voltage divider.
Thus, the stray resistance or conductance and the input resistance of the amplifier
must be disregarded. Re[𝑍4 ] = 0 means
$$\mathrm{Re}[Z_4] = R_2 + R_{oA} \times \left(1 + \frac{R_1 R_2}{R_1^2 + X_1^2} + \frac{X_1 \times (X_2 + X_3)}{R_1^2 + X_1^2}\right) = 0\,. \tag{4.36}$$
This gives the general frequency equation
$$-X_1 \times (X_2 + X_3) = \frac{R_2 + R_{oA}}{R_{oA}} \times (R_1^2 + X_1^2) + R_1 R_2\,. \tag{4.37}$$
The right-hand side of this equation is positive, consequently not all of the 𝑋𝑘 can be
of the same kind.
If 𝑅1 and 𝑅2 can be disregarded, above equation degenerates to
𝑋1 + 𝑋2 + 𝑋3 = 0 . (4.38)
Above conditions make sure that the phase shift is zero. In addition, the closed-loop
gain must be 1 so that the voltage gain 𝑔𝑣 of the amplifier must be (for a reactive voltage
divider)
$$g_v = \frac{1}{B} = \frac{X_3 + X_4}{X_3} = \frac{1}{X_3} \times \left(X_3 + X_2 + R_{oA} \times \frac{R_1 \times (X_2 + X_3) - R_2 X_1}{R_1^2 + X_1^2}\right). \tag{4.39}$$
Insertion of the general frequency relation yields the stability condition
$$g_v = -\frac{1}{X_1 X_3} \times \left[R_2 \times (R_1 + R_{oA}) + \frac{R_{oA} + R_2}{R_{oA}} \times (R_1^2 + R_1 R_{oA} + X_1^2)\right]. \tag{4.40}$$
For 𝑔𝑣 > 0 we need 𝑋1𝑋3 < 0, i.e., these reactances are of a different nature; they must be dual to each other.
With Re[𝑍1 ] = Re[𝑍2 ] = 0 this equation degenerates to
$$g_v = -\frac{X_1}{X_3} = \frac{X_2 + X_3}{X_3} = 1 + \frac{X_2}{X_3}\,. \tag{4.41}$$
From the above equation, it is clear that 𝑋2 and 𝑋3 are of the same nature because for 𝐵 < 1 it is necessary that 𝑔𝑣 > 1. Applying duality leads to Figure 4.20. The dual counterpart to (4.38) is
$$X_1^{-1} + X_2^{-1} + X_3^{-1} = 0 \tag{4.42}$$
and to (4.41)
$$g_i = -\frac{X_1^{-1}}{X_3^{-1}} = \frac{X_2^{-1} + X_3^{-1}}{X_3^{-1}} = 1 + \frac{X_2^{-1}}{X_3^{-1}}\,. \tag{4.43}$$
In Figure 4.21, real electronic components are used to accomplish the voltage division in the feedback circuit.
In Figure 4.22 real electronic components are used to accomplish the current di-
vision in the feedback circuit. 𝑌1 is the admittance of an inductor (characterized by
its inductance 𝐿 and a parasitic series resistance 𝑅s ). Both 𝑌2 and 𝑌3 are the admit-
tances of high quality capacitors with negligible stray properties. From the frequency
equation we get
$$\frac{1}{R_s^2 + \omega^2 L^2} = \frac{\omega L}{R_s^2 + \omega^2 L^2} \times \omega(C_1 + C_2) \tag{4.49}$$
from which with 𝐶 = 𝐶1 + 𝐶2 one arrives at the well-known relation for the resonant frequency of a series resonant circuit
$$\omega = \sqrt{\frac{1}{LC}}\,. \tag{4.50}$$
Table 4.3. Some numerical values for the dependence of the impedance bandwidth 𝐵𝑊 on the qual-
ity factor 𝑄.
1/𝑄      𝐵𝑊 × 2𝜋/𝜔r
0.001    0.001
0.010    0.010
0.100    0.098
0.300    0.278
0.400    0.360
0.500    0.438
The nonlinearity of the closed-loop (voltage) transfer function is the key to make the closed-loop gain 1
by having it slightly higher than 1 in the quiescent operating point so that, with in-
creasing amplitude, it is self-adjusting to 1. Obviously, the nonlinearity can occur in
the active device or in the feedback circuit. A very good solution was demonstrated in
Section 4.2.1.1 where the amplitude dependence of the gain was achieved dynamically
in a negative feedback loop. One advantage of such an arrangement is that negative
feedback provides a stable operation of the amplifier.
Aside from stable amplitude also stable frequency may be required. In the case of
a series resonant circuit this requires stable values of the inductance and the capaci-
tance, for a parallel resonant circuit also changes in the parasitic series resistance 𝑅s
must be avoided. The main reason for a frequency change lies in temperature depen-
dence.
There are (at least) two contributions to temperature dependence both of the in-
ductor and the capacitor. In both cases, the geometry changes because of thermal ex-
pansion; in addition there is a material-dependent component. The linear expansion
of the wire of a wire-wound inductor contributes about 2 × 10−5 /∘C (at 25 ∘C) to the
temperature coefficient. By burning silver wire into a ceramic substrate, this expan-
sion can be minimized. They have a temperature coefficient as low as 10−5 /∘C. The
internal flux in the inductor rises with temperature as the skin depth increases with
increasing resistance of the wire. This effect contributes about as much or even more
to the temperature coefficient so that it adds up to roughly +5 × 10−5 /∘C.
Aside from inductors with air cores (without core dependent contribution), there
are inductors with ceramic (ferrite) cores, or with silicon steel cores. The relative per-
meability (i.e., relative to air) of useful magnetic core materials ranges from 10 to
10 000, or, more practically, in the range of 100 to 1000. The inductance can be raised by
this factor using an appropriate core. The use of a ceramic core increases the temper-
ature coefficient by typically 9 × 10−5 /∘C. To comply with the need of miniaturization
there exist thin film chip inductors.
The temperature coefficient of capacitors is mainly a matter of the kind of dielec-
tric that is used. Ceramic capacitors with graded negative temperature coefficients
and with high stability and low losses are available for resonant circuit application.
By matching the negative coefficient with the positive coefficient of the inductor, the
temperature dependence of the resonant frequency can be minimized. As ceramic ca-
pacitors are quite noninductive compared to the other classes of capacitors, they are
well suited for high-frequency work. An alternative is a capacitor with an insulator of
polystyrene that has a typical temperature coefficient of −10−4 /∘C. Such a coefficient
is needed to yield stable tuned circuits compensating the positive 10−4 /∘C coefficient
of the inductor. It should be obvious that also the amplifier’s properties influence the
stability of the oscillation frequency. At higher frequencies changes in the phase shift
of the amplifier’s transfer function cannot be disregarded.
A superior capacitor uses mica for insulation. The parasitic inductance and
conductance are particularly small, as well as the temperature coefficient of about
+10−5 /∘C. It does not compensate for an inductor’s temperature coefficient but can be
very useful in special applications requiring high stability.
Problem
4.23. Show that the impedance of a series resonant circuit is zero at the resonant fre-
quency.
Colpitts oscillator
The frequency of a Colpitts oscillator (Figure 4.23a) is determined by a parallel reso-
nant circuit consisting of an inductor 𝐿 and two capacitors 𝐶1 and 𝐶2 in series per-
forming the voltage division. Thus, the oscillation frequency is approximately (disre-
garding the impedances of the active component) given by
$$\omega_r \approx \sqrt{\frac{1}{L} \times \left(\frac{1}{C_1} + \frac{1}{C_2}\right)} \tag{4.54}$$
Clapp oscillator
To maintain the stability condition (closed-loop gain equals 1), when varying the res-
onant frequency, both capacitors of a Colpitts circuit must be varied simultaneously
so that the amount of voltage division stays constant. This approach is not viable. De-
coupling the fulfilment of the phase criterion from that of the amplitude criterion is
done in the Clapp oscillator by introducing an additional capacitor 𝐶0 (Figure 4.24). 𝐶1
and 𝐶2 perform the voltage division, whereas 𝐶0 controls almost entirely the resonant
frequency if it has a small enough value
$$\omega_r \approx \sqrt{\frac{1}{L} \times \left(\frac{1}{C_0} + \frac{1}{C_1} + \frac{1}{C_2}\right)} \tag{4.55}$$
Fig. 4.23. Schematic of a Colpitts oscillator with a PNP transistor as active component: (a) circuit,
(b) equivalent circuit for small-signals, and (c) redrawn as feedback configuration with two-ports A
and B (without resistors in A).
Thus, tuning of 𝐶0 does not affect the closed-loop gain but acts predominantly on the
resonant frequency 𝜔r . Therefore, a Clapp circuit is often a better choice than a Colpitts
circuit when a variable frequency oscillator must be built.
Hartley oscillator
The frequency of a Hartley oscillator (Figure 4.25) is determined by a parallel resonant
circuit consisting of a capacitor 𝐶 and two inductors 𝐿 1 and 𝐿 2 in series performing
the voltage division. Considering the (magnetic) coupling factor 𝑘 between the two
coils the total effective inductance 𝐿 0 that determines the frequency of the oscillation
is
$$L_0 = L_1 + L_2 + k\sqrt{L_1 L_2}\,. \tag{4.56}$$
Thus, the oscillation frequency is approximately (disregarding the impedances of the
active component) given by
$$\omega_r \approx \sqrt{\frac{1}{L_0 C}}\,. \tag{4.57}$$
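For comparison, the sketch below evaluates (4.54), (4.55) and (4.57) for one set of assumed component values (the coupling factor 𝑘 of the Hartley inductors is assumed as well).

```python
# Sketch only (assumed values): resonant frequencies of the Colpitts, Clapp
# and Hartley configurations according to (4.54), (4.55) and (4.57).
import math

L, C1, C2 = 10e-6, 1e-9, 1e-9             # assumed Colpitts/Clapp values
C0 = 100e-12                              # assumed Clapp series capacitor
L1, L2, k, C = 5e-6, 5e-6, 0.1, 1e-9      # assumed Hartley values

w_colpitts = math.sqrt((1 / L) * (1 / C1 + 1 / C2))           # (4.54)
w_clapp    = math.sqrt((1 / L) * (1 / C0 + 1 / C1 + 1 / C2))  # (4.55)
L0 = L1 + L2 + k * math.sqrt(L1 * L2)                         # (4.56) as above
w_hartley  = math.sqrt(1 / (L0 * C))                          # (4.57)

for name, w in (("Colpitts", w_colpitts), ("Clapp", w_clapp), ("Hartley", w_hartley)):
    print(f"{name:8s}: f_r = {w / (2 * math.pi) / 1e6:6.3f} MHz")
```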
Fig. 4.24. Clapp oscillator based on an n-channel MOSFET as the active component: (a) circuit
(b) small-signal equivalent circuit (c) feedback configuration with two-ports.
Fig. 4.25. Principle of a Hartley oscillator based on an operational amplifier: (a) circuit (b) feedback
configuration with two-ports.
Fig. 4.26. Schematic of an Armstrong oscillator with an NPN transistor as an active component:
(a) circuit, (b) small-signal equivalent circuit, and (c) feedback configuration with two-ports.
Armstrong oscillator
This type of 𝐿𝐶 feedback oscillator uses a transformer in the feedback circuit, as
shown in Figure 4.26. As a transformer is symmetric with regard to input and output,
there are two options for placing the resonant circuit. The original Armstrong oscil-
lator had the resonant circuit at the input of the active component (a valve) with the
transformer providing the voltage division from output to input. The effective induc-
tance is primarily the inductance 𝐿 s of the secondary windings of the transformer so
that the resonant frequency is obtained as
𝜔r = √[1/(𝐿s 𝐶)] . (4.58)
The variant with the resonant circuit at the output of the active element is called
Meissner oscillator. In this case, the effective inductance is primarily the inductance 𝐿 p
of the primary windings of the transformer so that the resonant frequency is obtained
as
𝜔r = √[1/(𝐿p 𝐶)] . (4.59)
Due to their relative clumsiness and cost, transformers are usually avoided. For
that reason, Armstrong oscillators are not popular.
For analogous reasons, feedback oscillators with current division have found so
little interest that there are not even names for them. The reason is one of convenience:
Fig. 4.27. Symbol of a general active three-terminal component vs. equivalent symbols of actual
components.
Amplifiers are still optimized to resemble ideal voltage amplifiers so that designing a
feedback circuit with current division would be awkward.
𝑔𝑣 = 𝑋2 /𝑋3 + 1 . (4.60)
Table 4.5 lists the combinations of reactances for the three basic three terminal oscil-
lator circuits providing stable oscillation by positive feedback.
Fig. 4.28. Disregarding the position of the grounding point which has no impact on the oscillation,
there is no difference between the three LC feedback oscillators in the three basic circuit configura-
tions of active three-terminal components.
Figure 4.28 displays the three cases listed in Table 4.5 using purely reactive com-
ponents for an LC oscillator.
As demonstrated in the example, no specific basic circuit configuration of the ac-
tive three-terminal component can be identified in feedback LC oscillators. The reason
is quite obvious: there is no input (necessary) and consequently none of the terminals
are common to both input and output. Of course, this is not true if a transformer is
used in the feedback circuit. A transformer is a two-port forcing the active element to
have an input and an output. Consequently, for oscillators with transformer feedback
there exists the trinity of the basic circuits of three-terminal amplifiers.
Problem
4.24. Why, except in cases with transformer coupling, it does not make any difference
which basic circuit is chosen to explain a three-terminal oscillator?
Example 4.5 (Relation between quality factor (width of the resonance curve) and the
rise time of the oscillation amplitude). The quality factor is given by
1/𝑄 = (1/𝜔r ) × (1/𝜏1 ) , so that (4.61)
𝜏1 = 𝑄/𝜔r (4.62)
with 𝜏1 = 𝐿/𝑅, and disregarding 𝜏2 which depends on the tiny conductance of the
capacitor. The first time constant governs the rise of the oscillation amplitude after
starting the positive feedback action.
In Section 4.2.1.2, we have shown that
1/𝑄 ≈ 2𝜋𝐵𝑊/𝜔r (4.63)
so that we get
𝜏1 = 1/(2𝜋𝐵𝑊) . (4.64)
Some actual numbers measured at a feedback oscillator: 𝜏1 = 30 s and 𝑓 = 7.0 kHz, i.e., 𝜔 = 44 × 10³ s⁻¹, so that 𝑄 = 30 × 44 × 10³ ≈ 1 × 10⁶.
Such a high quality factor indicates that this oscillator is a quartz oscillator. It takes
this oscillator a very long time after start-up to build up the stable output signal.
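The arithmetic of this example can be redone in a few lines (using the measured values from above):

from math import pi

tau1 = 30.0        # s, rise-time constant of the oscillation amplitude
f    = 7.0e3       # Hz, oscillation frequency
w    = 2 * pi * f  # about 44e3 1/s

Q  = w * tau1              # from tau_1 = Q/w_r, eq. (4.62)
BW = 1 / (2 * pi * tau1)   # from tau_1 = 1/(2*pi*BW), eq. (4.64)
print(f"Q  = {Q:.2e}")           # about 1.3e6, i.e. roughly 1e6
print(f"BW = {BW * 1e3:.1f} mHz")  # such a resonance is only a few mHz wide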
When a crystal of quartz is deformed it gets electrically charged. On the other hand,
if one applies a voltage across a properly cut quartz it gets deformed. This property is
known as piezoelectricity. A solid plate of quartz has various mechanical resonances.
Depending on the cut of the plate, the plate will be stimulated to oscillate at one of
these resonances. Oscillating at a mechanical resonance frequency is energetically
most favorable so that electrical stimulation at the resonance frequency will require
the least power, i.e., the production of the piezoelectric charge (i.e., the gain) has its
maximum at this frequency. For electronic purposes, the response of a properly cut
and mounted quartz crystal can be described by a resonance circuit as shown in Fig-
ure 4.29b.
Actually, this circuit diagram can be interpreted in two ways:
– The quartz may be used in place of a series resonant circuit. In this case, 𝐶1 is
not part of the resonant circuit but just a capacitor shunting the series resonant
circuit. The resonant frequency is
𝜔rs = √[1/(𝐿𝐶2 )] (4.65)
Fig. 4.29. Quartz oscillator: (a) symbol of an electronic quartz, (b) equivalent electrical circuit, and
(c) amplitude and phase term of the impedance.
– The quartz may be used in place of a parallel resonant circuit. With 𝐶 the effective capacitance (the series combination of 𝐶1 and 𝐶2 ), the resonant frequency is
𝜔rp = √[1/(𝐿𝐶) − 𝑅²/𝐿²] (4.68)
and the quality factor
𝑄 = 𝜔rp × 𝐶/𝐺p with (4.69)
𝐺p = 𝑅𝐶/𝐿 . (4.70)
Taking the ratio of the resonance frequencies one gets
𝜔rp² /𝜔rs² ≈ (𝐶2 + 𝐶1 )/𝐶1 > 1 (4.71)
which means that in Figure 4.29c the resonance at the higher frequency is that of
the parallel resonant circuit. This figure does not only show the frequency depen-
dence of the impedance but also that of the phase shift. At frequencies between
the two resonance frequencies the phase shift becomes 90°, i.e., at these frequen-
cies the quartz crystal is equivalent to an inductor. However, this is a high quality
inductance which cannot be realized by a coil. To get some feeling for the size
of the elements that are the electronic equivalent of a resonating quartz crystal,
Table 4.6 gives an idea of the orders of magnitude involved.

Table 4.6. Orders of magnitude of the sizes of the elements describing the electrical performance of
a resonating quartz crystal.

Inductor 𝐿: 10¹ H
Resistor 𝑅: 10¹ Ω
Capacitor 𝐶1: 10¹ pF
Capacitor 𝐶2: 10⁻² pF
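Plugging the orders of magnitude of Table 4.6 into (4.65) and (4.69)–(4.71) makes these numbers tangible; the sketch below uses 𝐶 ≈ 𝐶2 for the parallel resonance, an approximation justified by 𝐶2 ≪ 𝐶1.

from math import sqrt, pi

L  = 10.0      # H
R  = 10.0      # Ohm
C1 = 10e-12    # F (shunt capacitor)
C2 = 0.01e-12  # F (series "motional" capacitor)

w_rs  = sqrt(1 / (L * C2))       # series resonance, eq. (4.65)
ratio = sqrt((C2 + C1) / C1)     # w_rp/w_rs from eq. (4.71)
Q     = w_rs * L / R             # from (4.69), (4.70): Q = w_rp*C/G_p = w_rp*L/R ~ w_rs*L/R

print(f"f_rs = {w_rs / (2 * pi) / 1e3:.0f} kHz")
print(f"w_rp/w_rs = {ratio:.5f}  (the two resonances are only 0.05 % apart)")
print(f"Q ~ {Q:.1e}")            # of the order 1e6, as measured in Example 4.5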
Fig. 4.30. Schematic of a feedback network of a phase shift oscillator.
Each filter section may consist of a resistor 𝑅𝑘 and a reactance 𝑋𝑘+1 with 𝑘 =
1, 3, 5. By setting the imaginary part of the transfer function equal zero, we get the
frequency equation
(𝑅1 × 𝑅3 × 𝑅5 )/(𝑋2 × 𝑋4 × 𝑋6 ) = (𝑅1 + 𝑅3 + 𝑅5 )/𝑋6 + (𝑅1 + 𝑅3 )/𝑋4 + 𝑅1 /𝑋2 . (4.72)
From setting the closed-loop gain equal 1, the required amplifier gain for stable oscil-
lation is obtained as
−𝑔𝑣 = (𝑅1 × 𝑅3 × 𝑅5 )/(𝑋2 × 𝑋4 × 𝑋6 ) × [(𝑋6 + 𝑋4 )/𝑅5 + (𝑋4 + 𝑋2 )/𝑅3 + 𝑋2 /𝑅1 ] − 1 . (4.73)
Now let us specialize: all filter sections shall have identical time constants, i.e.,
𝑍1 /𝑍2 = 𝑍3 /𝑍4 = 𝑍5 /𝑍6 , (4.74)
or with the constants 𝑐1 and 𝑐2 (positive real numbers)
𝑍3 = 𝑍1 × 𝑍4 /𝑍2 = 𝑐1 × 𝑍1 and
𝑍5 = 𝑍1 × 𝑍6 /𝑍2 = 𝑐2 × 𝑍1 .
Then we get for the frequency equation
(𝑍1 /𝑍2 )² = 3 + (𝑐1² + 𝑐1 + 𝑐2 )/(𝑐1 × 𝑐2 ) ≥ 3 (4.75)
and for the stability equation
−𝑔𝑣 = (𝑍1 /𝑍2 )² × [(𝑍1 /𝑍2 )² − 1/𝑐2 ] − 1 ≥ 8 . (4.76)
If the filters are low-pass filters, then
(𝑍1 /𝑍2 )² = 𝑅²𝜔²𝐶² = 𝜔²𝜏² = 𝜔² × (𝐿/𝑅)² (4.77)
depending on whether the 𝑅𝐶 or the 𝐿𝑅 combination is used. Thus, the frequency
equation becomes
𝜔²𝜏² = 3 + 𝑐1 /𝑐2 + 1/𝑐1 + 1/𝑐2 , (4.78)
and the oscillation frequency 𝜔o is
𝜔o = (1/𝜏) × √(3 + 𝑐1 /𝑐2 + 1/𝑐1 + 1/𝑐2 ) > (1/𝜏) × √3 (4.79)
−𝑔𝑣 = (𝜔𝜏)² × [(𝜔𝜏)² − 1/𝑐2 ] − 1 . (4.80)
For high-pass filters the inverse relation applies
(𝑍2 /𝑍1 )² = 𝑅²𝜔²𝐶² = 𝜔²𝜏² = 𝜔² 𝐿²/𝑅² (4.81)
depending on whether the 𝐶𝑅 or the 𝑅𝐿 combination is used. Thus, the frequency
equation becomes
1/(𝜔²𝜏²) = 3 + 𝑐1 /𝑐2 + 1/𝑐1 + 1/𝑐2 , (4.82)
and the oscillation frequency 𝜔o is
𝜔o = (1/𝜏) × 1/√(3 + 𝑐1 /𝑐2 + 1/𝑐1 + 1/𝑐2 ) < (1/𝜏) × 1/√3 (4.83)
−𝑔𝑣 = 1/(𝜔𝜏)² × [1/(𝜔𝜏)² − 1/𝑐2 ] − 1 . (4.84)
One finding is important: the oscillation frequency is in both cases inversely propor-
tional to the time constant, e.g., inversely proportional to the value of the capacitor.
It is convenient to choose 𝑐1 = 𝑐2 = 1. This is particularly helpful in the case
of a variable frequency oscillator because three identical variable capacitors can be
coupled mechanically so that any variation of them affects each capacitance by the
same amount. Using high-pass filters we get
𝜔o = (1/𝜏) × 1/√(3 + 3) = (1/𝜏) × 1/√6 (4.85)
requiring for stable oscillation an amplifier gain 𝑔𝑣 of
𝑔𝑣 = 6 × (6 − 1) − 1 = 29 . (4.86)
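A short numerical sketch (for an assumed time constant 𝜏) checks this result: with 𝑐1 = 𝑐2 = 1 the closed-loop gain of (4.87) is exactly 1 at the frequency given by (4.85) when the amplifier gain is 29.

from math import sqrt

tau = 1.0e-4                   # s, assumed RC time constant of each section
x_o = 3 + 1 + 1 + 1            # = 1/(w_o*tau)^2, eq. (4.82) with c1 = c2 = 1
w_o = 1 / (tau * sqrt(x_o))    # eq. (4.85): w_o = 1/(tau*sqrt(6))
g_v = x_o * (x_o - 1) - 1      # eq. (4.84) with 1/c2 = 1: required gain = 29

x  = 1 / (w_o * tau) ** 2
AB = g_v / sqrt(1 + 26 * x + 13 * x ** 2 + x ** 3)   # eq. (4.87)
print(f"w_o = {w_o:.0f} rad/s, required gain = {g_v}, |AB(w_o)| = {AB:.6f}")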
In Figure 4.31 the frequency dependence of the amplitude of the closed-loop gain
|𝐴𝐵| = 29/√(1 + 26/(𝜔²𝜏²) + 13/(𝜔⁴𝜏⁴) + 1/(𝜔⁶𝜏⁶)) (4.87)
Fig. 4.31. Frequency dependence of the closed-loop gain (a) amplitude |𝐴𝐵| (b) phase shift 𝜑(𝐴𝐵).
is displayed (together with the corresponding phase shift).
Example 4.6 (Phase shift oscillator using three equal unloaded filter sections). If the
loading of each filter section by the following one may be disregarded (i.e., if buffer
amplifiers, e.g., voltage followers, are used to isolate them), each filter section must
provide 60° phase shift so that 180° is achieved. Taking high-pass filters the amplitude
Problems
4.27. It was shown in Section 3.4.5.1 that a twin-T filter provides zero degree phase
shift also at 𝜔 = 0. Why does the oscillation not occur at 𝜔 = 0?
4.28. Which feedback branch determines the oscillation frequency of a twin-T oscil-
lator?
4.29. Is the oscillation frequency of a twin-T oscillator selected by the phase shift or
by the amplitude of the closed-loop transfer function?
Fig. 4.32. Schematic of a twin-T bridge oscillator.
The phase criterion delivers the frequency equation which gives the frequency (-ies)
at which the phase shift of the closed-loop transfer function is zero (or a multiple of
360°). The magnitude criterion delivers the stability equation from which the gain of
the active device can be determined so that the closed-loop gain (at the oscillation
frequency) is exactly 1. These two criteria may be realized in the following ways:
– both are realized in the positive feedback circuit (e.g., in the standard LC oscilla-
tors)
– the phase criterion is realized in the positive feedback circuit, the magnitude cri-
terion in the amplifier by negative feedback (e.g., in Wien’s bridge oscillator)
– the positive feedback circuit is frequency independent, the negative feedback se-
lects the oscillation frequency, (e.g., twin-T oscillator)
Further, the closed-loop transfer function may have the dimension of a voltage gain
or a current gain. That we proceeded with the voltage gain only is in the tradition
of circuit design. But, actually, for each voltage oriented circuit there is a replica in
the current world, i.e., there are twice as many circuits to be considered. This omis-
sion is not serious as the current versions can easily be constructed from their voltage
counterpart by application of duality.
There are two choices for the reactance of a simple filter section. Only filters using ca-
pacitors are being used because inductors are expensive, clumsy and have undesired
stray properties. Again, one-half of the possibilities remain idle.
As we have shown for one case (Section 4.2.1.1), the two methods of fulfilling the
phase criterion are closely related. The amplifier with a capacitor in the positive feed-
back circuit is a gyrator (Section 3.6.5) behaving like a (dynamic) inductor so that the
circuit can be explained as resonant circuit.
Adding up all variants we count at least 24 (considering that the phase-shift os-
cillator works both with high-pass and with low-pass filter sections). About one-third
of them have been realized in practical circuits.
As the frequency selection is often done by varying the capacitor 𝐶 on which the
frequency depends, one distinction is how this dependence looks. There are
two distinctly different dependences. Feedback oscillators based on 𝑅𝐶 filters have
a 1/𝐶 dependence, those based on 𝐿𝐶 resonant circuits a 1/√𝐶 dependence.
All feedback oscillators must obey both the phase and the magnitude criterion.
Problem
4.30. Why are oscillators based on current loop gain not presented?
As we have seen in Section 2.2.5 power gain can also be achieved by means of active
one-ports, i.e., with one-ports having a characteristic where a portion of it has nega-
tive impedance (admittance). On the other hand, we have shown in Section 2.5.4.1 that
negative impedances can be produced dynamically using amplifiers. Furthermore,
we have seen in Section 4.2.1.1 that an amplifier with positive feedback is electrically
equivalent to a circuit containing a negative impedance. Thus, it should be clear that
there is a substantial equality between a one-port having negative impedance and an
amplifier with positive feedback.
Therefore, positive feedback oscillators have their counterparts in oscillators us-
ing active one-ports. Figures 4.33 and 4.34 juxtapose feedback oscillators to the corre-
sponding active one-port oscillators.
In Table 4.7, the instability conditions of active one-ports are listed (Section 2.5.4.1).
As always when dealing with small-signal behavior we must not forget the bias-
ing. The operating point must be set in the region with negative impedance, or it must
be moved there by an activating gate signal. Naturally, the choice of the biasing net-
Fig. 4.33. Elements with S-shaped i-v-characteristics. (a) A two-port that produces dynamically the
impedance −𝑅L at the inverting input. (b) A one-port (neon bulb) having a (negative) impedance of
−𝑅L . (c) Exemplary characteristic with a (negative) impedance of −𝑅L . To produce instability, the im-
pedance 𝑍o must be ≤ |𝑅L |.
Fig. 4.34. Elements with N-shaped i-v-characteristics. (a) A two-port produces dynamically the impe-
dance −𝑅S at the noninverting input, (b) A one-port (tunnel diode) having a (negative) impedance
of −𝑅S . (c) Exemplary characteristic with a (negative) impedance of −𝑅S . To produce instability, the
impedance 𝑍o must be ≥ |𝑅S |.
work must be such that the stability condition of the characteristic in question is met.
Remember: Elements with S-shaped i-v-characteristics are stable against open-circuit
(any horizontal load line intersects the 𝑖–𝑣-characteristic in one point, only) whereas
N-shaped ones are stable against short-circuit (any vertical load line intersects the 𝑖–
𝑣-characteristic in one point, only).
Problems
4.31. For two-port oscillators with a transformer in the feedback circuit there do not
exist one-port equivalents. Why is that so?
4.32. What kind of reactive element is required for a one-port with negative impe-
dance of the N-type to form an oscillator?
4.33. What kind of reactive element is required for a one-port with negative impe-
dance of the S-type to form an oscillator?
Harmonic oscillators with one-ports are based on resonant circuits (Section 3.4.6). The
instability condition is given only at the resonant frequency where the impedance
(resp. admittance) is particularly small. As they are not zero, as would be the case for
ideal resonant circuits, they cause the damping of the circuit, prohibiting an
oscillation with constant amplitude. The resistive portions of the resonant circuit must
be counterbalanced by the negative impedance, provided, e.g., by the active one-port.
Then, one gets ideal behavior of the circuit with an oscillation of constant amplitude.
The damping resistor of a series resonant circuit lies in series to 𝐿 and 𝐶. Conse-
quently, an active one-port with an N-shaped i-v-characteristic must be put in series
to it so that its negative impedance matches the impedance of the resonant circuit in
(absolute) magnitude. This undamps the resonant circuit which then oscillates with
constant amplitude.
Again, the nonlinearity of the characteristic makes such an oscillation with sta-
ble amplitude possible. As long as the negative impedance at the operating point is
larger than the impedance of the resonant circuit, the oscillator will increase its am-
plitude until the amplitude is reached at which the exact match of the two impedances
is given. Then the oscillator will oscillate with this amplitude.
For the parallel resonant circuit the situation is dual. When an active one-port
with an S-shaped i-v-characteristic shunts a parallel resonant circuit its damping con-
ductance must be compensated for by the negative conductance of the one-port. This
situation was shown in depth in Section 4.2.1.1.
Problem
4.34. Design a one-port oscillator equivalent to an Armstrong oscillator.
In one-port relaxation oscillators the instability condition is fulfilled for all frequen-
cies above a minimum frequency 𝑓min . Consequently, there are two groups, depending
on the value of that minimum frequency 𝑓min :
– 𝑓min = 0, for bistable multivibrators and Schmitt triggers, and
– 𝑓min > 0, for astable and monostable multivibrators.
When an (analog) signal occurs usually two pieces of information are of interest
– its amplitude, and
– its timing.
Additional information can sometimes be gained from its shape, e.g., its rise time.
With regard to signal height, discrimination against noise (i.e., unwanted signals)
is most common. Such circuits are called discriminators; their typical representa-
tives are the Schmitt trigger (Section 4.1.2.1) and the comparator (Section 2.3.4.2). As the
information lies in the discriminator threshold (a voltage level), such an action is also
called level triggering. To get the full information on the signal height an analog-to-
digital-converter (Section 6.4.2) is needed.
In this section and the following sections, we concentrate on timing. As discussed
in the introduction to this chapter, absolute simultaneity cannot be determined. The
crucial property of time measurements is the resolution (or resolving) time. This is the
smallest time difference that can be captured (measured). Usually, the time of the sig-
nal is correlated in time with some event so that actually the instant of that event and
not that of the signal is of interest. In an ideal case, the instant of an event is recorded
as the step of a step signal. However, electrical step signals have finite rise time (Sec-
tion 3.3.3) so that the trigger moment depends on the (absolute) discriminator level
and the signal height. If the required time resolution is shorter than the rise time of
the signal, the discriminator level is set so low that it discriminates against noise, and
the effect of the pulse height is disregarded. In the case of binary signals which have
all the same pulse height, only such edge triggering is needed. In the case of signals
having different height (e.g., analog signals), there are special circuits available (Sec-
tions 4.4.1.2 and 4.4.1.3) that make the trigger instant quite independent of the pulse
height and rise time.
Obviously, frequency being the inverse of time must be analog, too, even if it is of-
ten given as an integer variable (e.g., “60 Hz”). Originally, frequency was only assigned
Fully digital signals are digital with regard to amplitude and time.
Problems
4.37. Which electronic variable is more important, time or frequency?
4.4.1 Trigger
As the same device serves two purposes, it does not surprise that the names discrim-
inator and trigger get intermixed. Thus, we have three types of triggers, all of them
called discriminators, namely
– leading-edge discriminator,
– zero-crossing discriminator, and
– constant-fraction discriminator.
The time determined by a trigger signal is not the instant at which the triggering event
occurred. Relative to the instant there will be some (propagation) delay and there will
be some time dispersion, i.e., the “delay” time will not be constant but vary (statisti-
cally) around a mean value.
Problem
4.40. What is the difference between a discriminator and a trigger?
Fig. 4.35. Walk of the trigger instant at a given discriminator threshold.
Jitter is something like timing noise. As the width of the time distribution of jitter can be
kept quite small (≪ 1 ns), it affects only high-speed applications. The cause of jitter is
mainly electronic noise of all kind. If a reduction of the amount of jitter is required in
a specific application, the cause of it (e.g., thermal noise, cross talk, electromagnetic
interference) must be (partially) eliminated.
Figure 4.35 illustrates how the trigger instant is moved depending on the rise time
of the input signal. There is not only the “geometric” effect that it takes longer for a sig-
nal with longer rise time to cross the threshold voltage 𝑣thr , but there is the additional
effect that the charging of the input capacity takes longer with a slow rising signal.
This effect is indicated by the shaded triangle (indicating equal areas) in the figure.
The dependence of the trigger instant on the rise time (and at a constant rise time on
the amplitude) is called walk.
If, in analog applications, walk must be essentially smaller than the rise time of
the triggering signals, one of the other two triggering methods should be used.
Problems
4.41. What is time jitter?
A discriminator threshold set at zero volts catches these moments of zero-crossing. This moment occurs (much) later
than the crossing of the leading edge. This delay might be disadvantageous.
The walk of the trigger instant is essentially reduced to the difference in time it
takes to charge the input capacitance (in Figure 4.35 the amount of charge needed is
symbolized by the shaded triangles).
Problem
4.43. What kind of signals are required for zero-crossing triggers?
Fig. 4.37. Pulse shapes in a true-constant-fraction discriminator: (a) input signals; (b) modified input signals; (c) bipolar timing signals.
Fig. 4.38. Timing relations of the input signal applied to an amplitude-and-rise-time-compensated (ARC) constant-fraction discriminator: (a) input signals; (b) modified input signals; (c) bipolar timing signals.
There are several ways to arrive at a constant fraction trigger. The true-constant-
fraction (TCF) trigger (Figure 4.37) triggers effectively at the same input-signal thresh-
old. To achieve that, 𝑡d must be at least somewhat longer than the rise time of a typical
signal. If there is a signal with longer rise time, the timing will be off as shown in Fig-
ure 4.37c.
To minimize the rise time dependence the delay 𝑡d must be reduced to less than
the minimum rise time of the input signals. Such a circuit is realized in the amplitude-
and-rise-time-compensated (ARC) constant fraction discriminator. Its timing response
is sketched in Figure 4.38.
In an ARC constant fraction discriminator the effective pulse height threshold at
which the discriminator triggers is inversely proportional to the rise time.
Constant-fraction discriminators are the best choice with regard to minimal timing
errors (walk, jitter) except for signals with a narrow range both in amplitude and rise
time. In these cases leading edge timing might provide better timing resolution.
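The amplitude independence can be made plausible with an idealized sketch, assuming the usual constant-fraction construction 𝑓 × 𝑣(𝑡) − 𝑣(𝑡 − 𝑡d ) and linear leading edges; the fraction 𝑓, the delay 𝑡d and all numerical values are assumptions for illustration only.

def arc_zero_crossing(amplitude, rise_time, fraction=0.2, delay=5e-9):
    # On the linear edge v(t) = amplitude * t / rise_time, the condition
    # fraction*v(t) = v(t - delay) gives t = delay / (1 - fraction):
    # amplitude and rise_time cancel out (ARC case, delay shorter than the rise time).
    assert delay < (1 - fraction) * rise_time
    return delay / (1 - fraction)

for amp, t_r in ((0.1, 20e-9), (1.0, 20e-9), (1.0, 50e-9)):
    t0 = arc_zero_crossing(amp, t_r)
    print(f"A = {amp:3.1f} V, t_r = {t_r * 1e9:2.0f} ns -> crossing at {t0 * 1e9:.2f} ns")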
counts became. A good deal of the background counts originated from cosmic rays
having high energy and, consequently, high signal amplitude. Signals with an am-
plitude outside the dynamic range of the amplifier system get distorted, i.e., clipped
at about the height of the power supply voltage. Therefore, signals of high energetic
events have a flat top. This unexpected signal form cannot be handled properly by a
constant fraction discriminator. Usually it will output multiple signals during the du-
ration of this clipped input signal. Even if there was only one event, many signals will
be recorded as lost which fully explained the observed mystery.
Problem
4.44. What happens when an out-of-range signal (i.e., a signal of a wrong shape) is
processed by a constant fraction discriminator?
Logic gates belong to analog electronics (just as the other circuits with binary output
we covered up to now) when used as coincidence circuits even if the name logic gates
suggests a binary nature. The reason for this possibly amazing statement is based on
the fact that this circuit is used in timing which is an analog process.
Simultaneity between two events as determined by the simultaneity of their sig-
nals can only be found out within some resolution time. Signals that occur within a
given resolution time are called coincident. If one kind of the signals must be delayed
to be coincident with those of the other kind, this is called a delayed coincidence.
In cases with time-correlation in the occurrence of two signals the time distances
give rise to a time (interval) spectrum that can be analyzed either directly or with
an analog-to-digital converter after being converted into a pulse height spectrum by
means of a time-to-pulse-height converter (Section 4.5.1).
A straightforward way of determining a coincidence between output signals of
two triggers is to sum the output signals (e.g., by means of a summing amplifier, Sec-
tion 2.5.1.1). An output signal, that has twice the height of an output signal stemming
from a single input, indicates that both signals occurred within the resolution time
which is the sum of the length of both signals. This overlap coincidence gets into trou-
ble if the trailing edge of one signal intersects the leading edge of the other. There
will be a short output signal with a height that depends on the amount of time over-
lap. Therefore, output signals must be shaped, e.g., by a monostable multivibrator
which shapes the output signals to signals of constant length and pulse-height H hav-
ing short rise-time.
If the only purpose of such a circuit is to find out whether two (or more) signals
at its input(s) are present at the same time, it is called an AND gate. In binary logic,
truth tables are powerful means to describe the relation between inputs and outputs of
logic elements (Section 5.3). Although we are dealing here with circuits having analog
Table 4.9. Truth table of a coincidence circuit (an electronic two-input positive logic AND gate).
A  B  |  X
L  L  |  L
L  H  |  L
H  L  |  L
H  H  |  H
Fig. 4.39. Two-input AND gate in diode logic.
inputs and binary outputs, truth tables are useful already in this context. A truth table
of a two-input AND gate is shown in Table 4.9. By its nature no information on the
analog property (timing) is contained in such a truth table. Obviously there will be an
output signal only when the two input signals overlap in time. There are several
ways of realizing logic circuits. This will be covered in Chapter 5.
A simple realization of a two-input AND gate in diode logic (DL) is shown in Fig-
ure 4.39. The low level L is about 0 V, the high level H about 5 V. One input or
both inputs at L (0 V) give an output signal L, at the voltage of a forward biased diode
(≈ 0.7 V). If both inputs are at H (5 V), the diodes are reverse biased, and the output
is connected to the 5-V supply voltage via the resistor 𝑅0 , i.e., it is at H.
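A minimal behavioral model (an assumption-level sketch, not a circuit simulation; the 0.7 V diode drop is assumed) reproduces Table 4.9: the output node is pulled to +5 V via 𝑅0 and each conducting diode clamps it to its input voltage plus one diode drop.

V_SUPPLY = 5.0
V_DIODE  = 0.7

def dl_and(v_a, v_b):
    # output is the lowest of: the supply pull-up, or either input plus a diode drop
    return min(V_SUPPLY, v_a + V_DIODE, v_b + V_DIODE)

for v_a in (0.0, 5.0):
    for v_b in (0.0, 5.0):
        print(f"A = {v_a:3.1f} V  B = {v_b:3.1f} V  ->  X = {dl_and(v_a, v_b):3.1f} V")
# Any input at L (0 V) gives X ~ 0.7 V (still L); only A = B = H gives X = 5 V (H).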
Problems
4.45. Why is a coincidence circuit an analog circuit?
Fig. 4.40. Principle of a Bothe AND gate shown with bipo-
lar npn junction transistors.
the Bothe circuit, because of problems with the biasing. Figure 4.41 gives an example
how the Rossi version of a two-input AND gate using bipolar junction transistors can
be designed. Strictly speaking, it is a positive logic NAND gate, i.e., an AND gate with
inversion of the output signal (see truth Table 4.10).
Problem
4.47. In which regard is the Rossi circuit dual to the Bothe circuit?
Fig. 4.41. Principle of a Rossi positive logic NAND gate shown with bipolar PNP junction transistors.
Table 4.10. Truth table of a two-input positive logic NAND gate.

L  L  H  L
L  H  H  L
H  L  H  L
H  H  L  H
Fig. 4.42. Two-input positive logic NAND gate in
CMOS technology.
Table 4.11. Truth table of an anticoincidence circuit.

L  L       L
L  L or H  L
H  L       H
H  H       L
Problem
4.49. Is an anticoincidence circuit an analog circuit, i.e., does it retain the timing in-
formation?
A linear gate resembles an anticoincidence circuit insofar as, again, the two inputs
are not equivalent. However, Table 4.11 does not really apply, as the output of a linear
gate is analog and not binary in amplitude. Therefore, there is one analog input, and
one binary gate input and the output signal has analog amplitude. Figure 4.43 shows
a circuit that transmits an analog signal only during the duration of a gate signal,
the so-called acquisition time. The operating point of a class C amplifier is moved by
the gate signal temporarily into the amplifier (class A) region so that the analog input
signal is properly amplified as long as the gate signal lasts. As so often a variation of
the long-tailed pair is used. Only when Q2 is cut off by means of a negative signal does the
analog signal at the input of Q1 get transmitted to the output.
Problems
4.50. What is the difference in the output signal of a linear gate and a logic gate?
4.52. When active, the cascade of 𝑄1 and 𝑄3 in Figure 4.43 has a distinct name. What
is it?
4.4.3.1 Sampling
The principle of linear gates is also applied for time digitizing of continuous analog
signals. This operation is called sampling. In sampled signals, the analog amplitude
information is maintained whereas the timing information is digitized by sampling
with a frequency that is appropriate for the task. However, often it is more advanta-
geous first to digitize the amplitude before it is sampled. Digital music (conversion of a
sound wave to a sequence of discrete-time signals), digital telephone, digital cameras,
digital television, and the use of digital measuring instruments are well-known fields
which are based on sampling. The sampling frequency or sampling rate is as low as
16 kilosample/s for digital telephone applications and up to 40 gigasample/s in very
high-speed digital oscilloscopes.
An ideal sampler generates samples equivalent to the instantaneous value of a
continuous signal at the desired time instants spaced by the sampling interval. From
looking at the output, it cannot be told, how the input looked between the sampling
instants. If the input changed slowly compared to the sampling rate, then the value
of the signal between two sample moments was somewhere between the two sam-
pled values so that a reconstruction by means of interpolation is justified. However,
for a rapidly changing input signal this procedure is not correct because for too wide
sample intervals the reconstruction will fail, because of a misinterpretation within the
interpolation process. This failure is called aliasing (Section 6.1).
The Nyquist–Shannon sampling theorem specifies that reconstruction of any signal is only possible when the sampling fre-
quency is at least twice that of the highest frequency component of the signal being
sampled. The Nyquist frequency (half the sample rate) must exceed the highest fre-
quency component of the signal being sampled.
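A short sketch of this criterion: if a signal frequency exceeds the Nyquist frequency, the reconstruction shows it at the folded (alias) frequency instead.

def nyquist_check(f_signal, f_sample):
    f_nyquist = f_sample / 2.0    # half the sample rate
    # fold the signal frequency back into the band 0 ... f_nyquist
    f_alias = abs(f_signal - round(f_signal / f_sample) * f_sample)
    return f_signal <= f_nyquist, f_nyquist, f_alias

for f_sig, f_s in ((10e3, 44.1e3), (30e3, 44.1e3)):
    ok, f_ny, f_al = nyquist_check(f_sig, f_s)
    if ok:
        print(f"{f_sig / 1e3:.0f} kHz at {f_s / 1e3} kS/s: ok (below {f_ny / 1e3:.2f} kHz)")
    else:
        print(f"{f_sig / 1e3:.0f} kHz at {f_s / 1e3} kS/s: aliased to {f_al / 1e3:.1f} kHz")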
Due to the finite time resolution, the amplitude of the sample is a time average over
the resolution width, rather than that of a sampling instant.
Problem
4.53. The human ear is receptive for sound of frequencies between about 16 Hz and
16 kHz, depending on age.
(a) What is the Nyquist frequency when sampling sound that is meant for human
ears?
(b) Hi-fi audio compact disks are based on a sampling rate of 44.1 ksamples/s. Is this
sampling rate sufficient to obey the Nyquist–Shannon sampling theorem?
When the input voltage gets smaller, the diode is reverse biased and the voltage across the ca-
pacitor stays constant (disregarding its discharge by leakage current). After the analog
information is not needed any more, the capacitor is discharged by closing the switch.
Such a circuit is effectively a pulse-lengthener. A low droop peak detector using a 1 nF
polystyrene capacitor with an unbelievably low droop of 1 mV/s is reported.
Let us use Figure 4.45 to explain in more detail, how a sample-and-hold circuit
works. Via Q1 of the long-tailed pair and Q3 the capacitor 𝐶 gets positively charged if
a positive input signal is applied. Q1 and Q2 of the long-tailed pair form a differential
amplifier. By feedback through the FET and Q4 , the momentary voltage at the capacitor
is fed to the input of Q2 so that the voltage across the capacitor will rise until there is
no voltage difference at the inputs of the differential amplifier. After discharging the
capacitor by the switch, the circuit is ready for the next signal. (The leakage current of
the diode compensates for the leakage current of Q3 ).
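The droop of such a hold (or peak-detector) capacitor is simply the net leakage current divided by the capacitance; the sketch below uses the 1 nF/1 mV s⁻¹ figure quoted above and, as a second illustration, the values of the monolithic circuits mentioned below.

def droop_rate(i_leak, c_hold):
    """Droop in V/s for a leakage current i_leak (A) and a hold capacitor c_hold (F)."""
    return i_leak / c_hold

print(f"{droop_rate(1e-12, 1e-9):.1e} V/s")        # 1 pA into 1 nF   -> 1e-3 V/s (1 mV/s)
print(f"{droop_rate(100e-12, 100e-12):.1f} V/s")   # 100 pA into 100 pF -> 1 V/s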
In monolithic FET-input integrating amplifiers, sample-and-hold circuits can be
found with internal capacitors of typically 10−10 F. A switch with very low leakage is
essential. These circuits have a typical droop rate of 1 V/s and aperture times from
The purpose of analog delays is the preservation of analog information for future use.
Analog information of all kind (acoustic, optical, etc.) can be stored for long times by
magnetic or other nonelectronic media. Some of this storage is nonvolatile. However,
here we are disregarding nonelectronic methods.
If it is the matter of preserving the voltage value occurring at some instant, a
sample-and-hold circuit (previous section) will do the job. If one must preserve the
peak voltage value of a single signal, a peak detector (previous section) will do the
job. An interesting way of storing (delaying) sampled analog voltage signals in the
millisecond range is an analog shift register that moves from cell to cell the charge
of a CCD (charge-coupled device) which is proportional to the analog values. Such
circuits are available in very-large-scale integration (VLSI) technique. Analog charge
packets are shifted from one cell to another by clock pulses. Depending on the clock
rate and the number of cells the cell content is made available at a later time. This way
a sequence of sampled analog values can be stored for later use.
Storing the full time-dependence of an analog signal for a given time interval 𝑡d
means moving it on the time axis by 𝑡d , i.e., delaying it by that amount. For very long
delays, the only practical method is recording and reproduction using nonelectronic
devices. The only reasonable way of accomplishing analog delay by purely electronic
means is to take advantage of the propagation delay (Section 3.5.3).
Propagation of signals in transmission lines is somewhat slower than that of light
in vacuum (0.3 m/ns). Consequently, 75.0 m of a 50 Ω-coaxial cable (type RG-58) can
provide a delay of 383 ns. However, as we have learnt the transmission through coaxial
cables is lossy, in particular at the highest frequencies. Thus, only a restricted spec-
trum of frequencies can be “stored” resulting in a rise time of a voltage step signal of
about 6 ns after transmission through the above cable.
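The delay follows directly from the velocity factor of the cable; the value of about 0.66 used below for solid-polyethylene RG-58 is an assumption consistent with the numbers above.

C0_M_PER_NS = 0.3   # speed of light in vacuum, m/ns

def coax_delay_ns(length_m, velocity_factor=0.66):
    return length_m / (C0_M_PER_NS * velocity_factor)

print(f"{coax_delay_ns(75.0):.0f} ns")   # roughly 380 ns for 75 m of RG-58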
Coaxial cables are useful for delays measured in nanoseconds, with an upper limit
of a few microseconds. Lumped delay lines consisting of LC-filter sections are rarely
used, partly, because of the unpopular inductors even if they extend the useful delay
range due to the smaller signal propagation velocity.
A coaxial cable, with a center conductor wound helically around an insulating
(preferable ferrite) core, can provide specific delays of 20 to 3000 ns/m. Not surpris-
ingly these delay lines have higher characteristic impedance (on the order of 1 kΩ)
and have rather bad higher frequency transmission properties. For a 1 μs delay, the
bandwidth is around 107 Hz, which restricts their use to not too fast applications.
Problem
4.55. Name electronic ways of storing analog amplitude information.
Problems
4.56. Why can it be beneficial to shorten pulses?
The conversion of any of the analog signal variables (amplitude, time, frequency) into
any other analog signal variable is a purely analog process even if it is done in prepa-
ration for digitizing. Therefore, we are dealing with it here without reference to digitiz-
ing which will be covered in Chapter 6. The substitution of one variable for the other
is entirely natural and should always be employed when a better performance can be
expected without too much additional circuitry.
Problem
4.58. Does the conversion of one analog variable into another belong to the field of
analog or digital electronics?
If the time distance of signals is converted into amplitudes by the overlap method,
there are special requirements on the quality of these (rectangular) signals:
– their lengths must not only agree with the maximum time distance to be con-
verted, but
– they must be highly stable, as well.
If two such signals are fed into a two-input AND gate (Section 4.4.2), the length of the
(rectangular) output signal is that time span which these signals overlap. The max-
imum overlap occurs for coincident signals, the minimum at the edge of the range
given by twice the length of the input signal. By integration (with an active low-pass
filter, Section 3.6.4.1), the rectangular output signals are converted into ramp signals
with a height that decreases linearly with the time distance of the two signals.
Although the linearity of the conversion is far from perfect, this method has its mer-
its when processing high counting rates because of its simplicity. Figure 4.46 shows a
simple circuit of an overlap time-to-amplitude converter.
Example 4.8 (Adjusting the full-scale (FS) range of a TAC to the need at hand). Accord-
ing to (4.91) the FS-range of a TAC determines the range of the time intervals that can
be converted. In a commercial TAC various FS-time intervals are offered by switching
the size of the conversion capacitor. If now the built-in FS-ranges are, e.g., 0.3 and
0.5 μs but 0.4 μs is needed, the second range could only be used at the loss of resolu-
tion. A simple remedy is to shunt the appropriate conversion capacitor with one that
has one-third of its value, effectively changing this FS-range from 0.3 to 0.4 μs. To have
as little temperature dependence as possible this shunting capacitor should be of the
mica type.
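The arithmetic of this example in a few lines (the absolute capacitor value is an arbitrary assumption; only the ratio matters, because the FS range scales with the total conversion capacitance):

def new_fs_range(fs_old, c_old, c_shunt):
    # the full-scale time range scales with the total conversion capacitance
    return fs_old * (c_old + c_shunt) / c_old

c = 100e-12   # assumed value of the built-in conversion capacitor
print(f"{new_fs_range(0.3e-6, c, c / 3) * 1e6:.2f} us")   # -> 0.40 us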
Problem
4.59. Which of the two signals of an overlap coincidence serves as start signal?
The discharge time is 𝑡dis (i.e., the time from the instant at which the current source is activated until
the voltage across the capacitor is zero). From 𝑡dis /𝑡ch = 𝑣i /(𝑅 × 𝑖S ) the conversion
constant 𝑡ch /(𝑖S × 𝑅) is obtained which is independent of 𝐶 and has as unit s/V. The
stability of this conversion factor depends on the stability of the current source, of the
resistor and of the length of the charging time. By using the same time base for 𝑡ch and
𝑡dis , any longer-term instability of the time base drops out.
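A sketch of this conversion with assumed component values: the input voltage drives a current 𝑣i /𝑅 into the conversion capacitor for the fixed charging time 𝑡ch , which is then removed by the constant current 𝑖S .

def discharge_time(v_i, t_ch=10e-6, R=10e3, i_S=1e-3):
    # conversion constant t_ch/(i_S*R) in s/V; component values are assumptions
    return (t_ch / (i_S * R)) * v_i

for v in (1.0, 2.0, 5.0):
    print(f"v_i = {v} V -> t_dis = {discharge_time(v) * 1e6:.1f} us")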
Problem
4.62. Is it possible to determine whether a conversion of a voltage into a time interval
inside a black box is done directly or by a compound device?
𝑡dis /𝑡ch = 𝑖ch /𝑖dis . (4.93)
Problem
4.63. Does the duty factor of a train of pulses increase by expansion of the pulse
length?
The frequency of relaxation oscillators (Section 4.2) depends mainly on the charg-
ing time of a capacitor 𝐶. Thus, there are two parameters that can be varied, the ca-
pacity and the (dis)charging current (R).
In both cases, a voltage applied to a varactor (=variable capacitor) diode that is
part of the circuit’s capacitance varies the frequency because a varactor diode has a
voltage dependent capacity.
Harmonic VCOs are usually more stable than the relaxation VCOs, whereas for
the latter the tuneable frequency range is usually wider. In addition, relaxation VCOs offer the
option to vary the charging rate of the capacitor by means of a voltage controlled cur-
rent source which is particularly helpful at low frequencies. The most widely used har-
monic VCO circuits are based on the Colpitts oscillator (Section 4.2.1.3). The conversion
curve of simple VCOs will have a limited linear range only. To qualify as voltage-to-
frequency converter (VFC) a highly linear response over a wide range of input voltages
is essential.
Aside from VCOs two basic VFC architectures are common, one based on a current-
steering multivibrator, the other uses the charge balanced method which exists in two
forms, the asynchronous and the clocked (synchronous) version. Such amplitude-to-
frequency conversion (with pulse-frequency modulation) was originally applied to
transmit analog signals, in particular by telemetry links (e.g., transmitted from remote
sensors). Because of the analog system noise superimposed on directly transmitted
analog signals, the quality of the transmitted pulse stream is superior. The analog sig-
nals are recovered with a low-pass filter after reshaping the received pulses. Even in
the presence of electronic noise it is possible to recover the leading and trailing edges
of the transmitted pulse stream thus rejecting the noise contribution. The accuracy
of the transmission process is reduced to the accuracy with which the analog input
signal can be recovered from the transmitted pulse stream.
The principle of the current-steering VCF is shown in Figure 4.48. A voltage fol-
lower (Section 2.5.2.1) with a source follower (Section 2.4.3.5) as output stage converts
via 𝑅S the voltage into current. This (drain) current 𝑖D negatively charges a capaci-
tor 𝐶. When the drain voltage in the form of a negative ramp drops below 𝑣ref , the
Schmitt trigger (Section 4.1.2.1) fires, switching the flip–flop (Section 4.1.3.2). The flip–
flop steers a changeover switch reversing the terminals of the capacitor 𝐶, i.e., the
charge of the capacitor reverses its sign with regard to 𝑖D . Consequently, 𝑖D continues
charging the capacitor negatively which means that due to reversed polarity the ca-
pacitor is being discharged. The voltage across the capacitor has triangular wave form
(𝑣𝐶 in Figure 4.48) because the change from charging to discharging occurs always at
the same amplitude. The next steering of the changeover switch finds an empty ca-
pacitor. Therefore, changing the polarity is without consequence for the value of the
drain voltage 𝑣D . The voltage 𝑣D at the input of the Schmitt trigger has the shape of a
saw tooth. It can be best explained by the subtraction of the triangular signal 𝑣𝐶 across
the capacitor from the power supply voltage and paying attention to the reversal of the
terminals of the capacitor, i.e., a rising capacitor voltage decreases this input voltage, and so does
a falling capacitor voltage because of the reversed capacitor polarity. Such a VFC is simple in
Fig. 4.48. Principle of the current-steering VFC, using a voltage follower, a Schmitt trigger, and a
toggle flip–flop.
its design, it provides good accuracy, and does not need much power. Consequently,
it is a favorite in telemetry applications.
The principle of a charge balance VFC, which is more demanding and has better
performance, is shown in Figure 4.49. The conversion capacitor is part of a charge-
sensitive amplifier (Section 3.6.4.1). If its output voltage 𝑣o1 passes the threshold value
𝑣ref , a Schmitt trigger (Section 4.1.2.1) fires and a monostable multivibrator delivers an
output signal 𝑣o of a precise length. During the duration of this signal, a precise cur-
rent source removes a fixed amount of charge from the conversion capacitor reducing
the output voltage of the amplifier 𝑣Ao . As the input current into the amplifier con-
tinues to flow, no input charge gets lost during this discharging process. Additional
charge at the input raises the output voltage of the amplifier again beyond the thresh-
old repeating the process described above. The time difference between the responses
of the Schmitt trigger (and one-shot multivibrator) depends on the magnitude of the
current flowing into the input, i.e., on the input voltage 𝑣i . The output pulse rate is pro-
portional to the rate at which the removed charge is replenished by the input signal.
The conversion quality of such a circuit is very good; it depends on the stability
both of the current source and the length of the output signal of the one-shot (its timing
capacitor), and on charge losses in the conversion capacitor (not on its value and its
stability).
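A sketch of the equilibrium condition: every output pulse removes the fixed charge 𝑖ref × 𝑡os from the conversion capacitor, and in equilibrium this charge is replenished by the input current, so the pulse rate is their ratio. All names and numerical values (in particular the input resistor turning 𝑣i into a current) are assumptions for illustration.

def vfc_output_frequency(v_i, R_in=100e3, i_ref=1e-3, t_os=1e-6):
    i_in = v_i / R_in              # input current into the charge-sensitive amplifier
    q_per_pulse = i_ref * t_os     # charge removed during one precision one-shot signal
    return i_in / q_per_pulse      # output pulses per second

for v in (0.1, 1.0, 10.0):
    print(f"v_i = {v:5.1f} V -> f_out = {vfc_output_frequency(v) / 1e3:6.2f} kHz")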
The same principle is used in the clocked VFC except for the one-shot that is re-
placed by a D flip–flop that is synchronized to a clock signal. Synchronization provides
easier handling of the output data transfer in particular when there are signals from
several channels.
Although the circuitry differs only slightly between the asynchronous and the syn-
chronous version there is a very distinct difference in their performance: the output
frequency of the synchronous circuit is not analog any more; it has become digitized
by the coincidence with clock signals. The output signals are digital both with regard
to amplitude and with regard to time; they are of the digital–digital type (Chapter 5).
Voltage-to-frequency converters are used as input part of composite ADCs (Sec-
tion 6.6). The output part would be a frequency digitizer to convert that frequency into
a digitally coded output signal representing the input voltage. In telemetry the two
parts may be widely separated, e.g., with the frequency signal transmitted wirelessly.
This is an easy way to digitize signals from a remote sensor.
Problem
4.64. The quality of the charge balance VFC does depend neither on the stability nor
on the capacity of the conversion capacitor. Why?
Fig. 4.50. Diode pump as a stair-case generator: (a) basic circuit; (b) input and output voltage.
when reverse biased. Consequently, each output step is smaller than the previous one
resulting in an output signal form shown in Figure 4.50 b).
Using the basic relation 𝐶 = 𝑄/𝑣𝐶 there is an even shorter way to explain the func-
tion of a diode pump. The positive charge per pulse of 𝐶2 equals the positive charge of
𝐶1 ; it is 𝑄+ = 𝐶1 × 𝑣𝐶1 max . The negative charge per pulse flows through D1 to ground.
Thus, 𝐶2 does not get discharged but accumulates the positive charge of all input sig-
nals.
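A simple charge-conservation model of the uncompensated circuit (ideal diodes, assumed capacitor values; the step formula is an inference from the charge balance above, not taken from the text) reproduces the shrinking staircase of Figure 4.50(b):

C1, C2 = 1e-9, 10e-9   # assumed capacitor values
v_max  = 5.0           # assumed input pulse amplitude
v_o    = 0.0

for pulse in range(1, 9):
    # each pulse shares the remaining voltage difference between C1 and C2
    step = C1 / (C1 + C2) * (v_max - v_o)
    v_o += step
    print(f"pulse {pulse}: step = {step * 1e3:5.1f} mV, v_o = {v_o:.3f} V")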
There are two remedies to improve linearity, i.e., to make the step size equal for
each pulse
– markedly reducing the output voltage behind the diode D2 , or
– avoiding the reverse biasing of D2 .
Figure 4.51 shows how this can be done using feedback (Section 2.4.3.2).
In Figure 4.51a 𝐷2 works against the virtual ground of the amplifier which hardly
changes its voltage when 𝐶2 gets charged. Thus, every input signal delivers the same
amount of charge into the charge sensitive amplifier (i.e., transimpedance amplifier)
that charges 𝐶2 .
In the bootstrap version of Figure 4.51b, the diode D1 does not work against ground
but against the voltage across 𝐶2 provided by a voltage follower. Again for each input
signal the diode D2 has the identical operating conditions so that for each input signal
the same amount of charge is delivered into 𝐶2 .
By reversing both diodes, a diode pump will respond to negative signals delivering
a negative output. The staircase output signal can be utilized for prescaling the input
frequency by a factor of 𝑛 by making the reference voltage of a comparator equal to
the voltage of 𝑛 steps. Thus, the comparator triggers after 𝑛 steps controlling a switch
to discharge 𝐶2 .
Diode pumps are integrating circuits because their output signal is proportional
to the sum of the accumulated input signals. To use such a circuit as a frequency con-
verter, equilibrium between incoming and discarded charge must be reached, i.e., the
charge collecting capacitor 𝐶2 must be discharged at a constant rate, e.g., by a shunt-
ing resistor 𝑅2 . By the choice of the time constant 𝑅2 𝐶2 the speed with which the cir-
cuit responds to a frequency change can be adjusted.
A modern monolithic FVC reverses the charge balancing technique of the VFC
(Section 4.5.3) by injecting a fixed amount of charge into a charge sensitive ampli-
fier (Section 3.6.4.1) each time an input signal arrives. By permanently discharging the
capacitor, as discussed before, a dynamic equilibrium is reached so that the output
voltage reflects the input frequency. Again a short time constant is needed to follow
quick frequency changes.
Problem
4.65. Name the two methods of linearizing the diode pump output.
5 Fully digital circuits
An analog circuit is designed so that after switching on the power supply it will assume
a unique quiescent operating point. Circuits that have their quiescent (output) oper-
ating point in one of the two flat portions of the transfer function (with zero gain) are
in one of two output states, L and H (Section 4.1). They can stay (temporarily) in either
state. Binary electronics is based on the existence of two distinctly different (output)
states which are usually described by their voltage values.
In this chapter, circuits that are fully binary are dealt with, i.e., both the amplitude
and the timing of their output signals do not reflect corresponding properties of an
analog (input) signal. Other circuits commonly thought to be digital, only because of
their two-state (L and H) output were dealt with in previous sections, e.g.,
– comparators (Section 2.3.4.2),
– relaxation oscillators (Section 4.1) including triggers (Section 4.4.1) and flip–flops
(Section 4.1.3.2), and
– gates (coincidence circuits) (Section 4.4.2).
When used in digital electronics, all these circuits lose their analog timing property
by synchronization of the output signal with an independent master clock.
Digital circuits may be categorized as:
– logic circuits,
– storage circuits (registers, memories),
– interface circuits (level shifters, code converters, serializer, deserializer, etc.),
– transmitter and receiver circuits,
– driver circuits (hardware management), and
– processors (arithmetic and others).
For obvious reasons any circuit that is computer specific is excluded, i.e., only ele-
mentary digital circuits of moderate complexity are dealt with which are found, e.g.,
in traditional digital measuring instruments.
Problem
5.1. Does a fully digital signal still contain analog component information?
For those not so familiar with operating points, a switch is usually used as an example
of binary behavior. In Figure 5.1, a simple ON/OFF switch connected to a real voltage
source with voltage 𝑣S and source impedance 𝑅S is shown. The so-called truth table,
i.e., Table 5.1 summarizes its behavior.
Fig. 5.1. Switching (a) a real voltage source and (b) a real current source.
A truth table is a listing of the values of all possible relations between the input(s), e.g., designated
as A, (B, C, etc.,) and of the corresponding output(s), often called Q or X, (Y,) and also Q or X1 , (X2 ,
etc.).
In Figure 5.1a, the switch lies in series to the source. After converting the real voltage
source to a real current source (Norton’s theorem), the switch lies in parallel to the
source. A switch is self-dual. If it is OFF (open-circuit, zero admittance), the voltage
across it is H (and the current through it is L). If it is ON (short-circuit, zero impedance),
the voltage is L (and the current H).
The two states of binary circuits, also called logical levels (either L/H or OFF /ON),
can be correlated to the two binary digits 0₂ and 1₂ or the two binary logic values
F(=False) and T(=True).
A logical system (in short logic) is called positive if H is assigned the value 1₂ (or T = True). If H is
assigned 0₂ (or F = False) the logic is negative. In circuit design, it is sometimes beneficial to switch
from positive logic to negative logic or vice versa.
Both types of logic are equivalent even if positive logic is preferred. In some cases,
it pays to use a mixed logic when it spares circuit components. Thus, depending on
the employed logic, the same binary circuit can fulfill two different logic functions.
Therefore, we prefer the electronic states L and H rather than the logical states 0 and
1 in the truth tables whenever applicable.
We will refrain from digging deep into binary logic. In particular, we do not show
how the number of logic elements can be minimized. We will concentrate on binary
electronic components. This is the more justified as conversion of logical patterns into
circuits has been done most efficiently by the suppliers of integrated digital circuits.
The logical side is only half the truth. At least as important is the behavior of the
circuit in the time domain. As any process consumes time, there are, at least, two ef-
fects to be considered:
– Synchronization that is necessary to avoid spurious signals.
– Existence of busy time that signals the potentiality of signal loss.
Neither effect can be accounted for by binary logic (truth tables). One bad example of
signal misalignment is demonstrated in Section 5.3.1.3. To avoid such problems which
usually will be very serious, synchronization of logic devices by applying a “clock”
signal to the strobing (enable) input is unavoidable.
Already the impossibility to generate a signal with zero rise time makes clear that
it is not possible to resolve two signals that occur within the rise time. Depending on
the operation performed inside the device to which a signal is applied, the resolving
time will be much larger than the rise time. Example 5.1 is intended to clarify the situ-
ation.
time of 1 μs, the device will be busy for 100 000 μs/s, i.e., 10% of the time. There-
fore, 10% of randomly arriving signals will not be processed, as the probability of
occurrence is the same in each time interval. Decreasing the busy time by a factor
of 100 to 10 ns reduces the loss accordingly to 0.1%. As is obvious, the loss can-
not be made to equal zero. As it is not possible to make the number of lost signals
zero it is necessary to measure the time the device is dead, or even better, the time
during which the device is receptive. This can best be done by devising an anti-
coincidence between the signals of the timer and the busy time establishing the
so-called live time; the actual time the device was operative.
(c) Statistical signal arriving in regular time bursts.
In this situation, a combined solution is advisable. If the time interval between
the burst is larger than the dead time there is no difference to case (b). Otherwise
it is necessary to count the lost bursts according to (a). To get a grip on the signal
loss within each time frame it is necessary to count (with as small dead time as
feasible) all arriving signals and those signals that get processed. This gives, to a
first order, the additional correction for the statistical nature of the signals.
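The first-order estimate used in case (b) in a few lines: for randomly arriving signals the busy (dead) fraction is rate × busy time, and this is also the fraction of signals that gets lost.

def lost_fraction(rate_per_s, busy_time_s):
    return rate_per_s * busy_time_s   # first-order estimate, valid for small values

print(f"{lost_fraction(100_000, 1e-6):.1%}")    # 1 us busy time  -> 10 % lost
print(f"{lost_fraction(100_000, 10e-9):.1%}")   # 10 ns busy time -> 0.1 % lost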
Problems
5.2. Name as many self-dual electronic elements as possible.
Binary logic is the base for handling binary values, numbers with the two binary dig-
its 0₂ and 1₂ and the logical variables F and T. Using arithmetic operations (adding,
subtracting, multiplying, dividing, etc.) and logical decisions (e.g., equality, non-
equalities, inversion) logical codes can be designed to tackle nearly any thinkable
task. In all more complicated cases, such codes are not designed as direct binary
codes but in a computer language to be used in a microprocessor (computer).
It is an obvious fact that the time element makes it necessary to have a step-by-step
procedure when more than one logical operation must be accomplished. In binary
electronics, this is called serial operation or serial logic. To keep track of the meaning
of each step, we need a protocol that allocates significance to the position (in space and
time) and to the electrical state (to the binary digit = bit) of each binary element. To
avoid faults in the transmission of data, protocols contain redundancies, i.e., specific
information on the data at hand. By checking these redundancies, faulty data may
be recognized (e.g., by the parity check or the cyclic redundancy check (CRC)). Even though these topics are highly important for data transmission, we disregard this part of digital circuit design as it is not basic.
A straightforward way to represent a number in a code is by weighting its places.
The usual decimal system uses a weighted code based on the number 10 by assign-
ing the (decimal) digit at the 𝑛th place the weight of 10ⁿ⁻¹, e.g., the decimal integer 3456₁₀ = 3 × 10³ + 4 × 10² + 5 × 10¹ + 6 × 10⁰. The straight binary presentation of numbers follows the same pattern: the binary integer 1011010₂ = 1 × 2⁶ + 0 × 2⁵ + 1 × 2⁴ + 1 × 2³ + 0 × 2² + 1 × 2¹ + 0 × 2⁰ (which is in the decimal system 90₁₀).
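The weighted evaluation of place values is easily mirrored in a few lines of code. The following Python sketch (an illustration, not part of the original text) sums digit × base^position for an arbitrary base and produces the straight binary representation of a decimal integer:

def from_weighted(digits, base):
    """Evaluate a weighted code; digits are given most significant first."""
    value = 0
    for d in digits:
        value = value * base + d               # shift by one place, then add the next digit
    return value

def to_binary(value):
    """Straight (weighted) binary representation as a string of binary digits."""
    bits = ""
    while value:
        bits = str(value % 2) + bits           # the least significant bit is produced first
        value //= 2
    return bits or "0"

print(from_weighted([3, 4, 5, 6], 10))          # 3456
print(from_weighted([1, 0, 1, 1, 0, 1, 0], 2))  # 90
print(to_binary(90))                            # 1011010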
We must not only decide on the type of code but also generate a protocol which
decides
– on the maximum number 𝑁max of digits that the circuit can handle. Usually, this
will be a multiple of 4 (up to 128), and
– whether this number 𝑁max of digits is handled serially (sequentially) or in par-
allel. In the first case, a single binary element is used 𝑁max times to present the
number in question with the protocol assigning the correct weight to each step,
and in the second case, 𝑁max single binary elements must be present at the same
time (in parallel) so that in one step the complete number is electronically present.
In the latter case, the correct weight must be assigned to each element. Of course,
there could be a mixed presentation, e.g., two steps and 𝑁max/2 binary elements.
As is easily seen, using binary notation of numbers means an inflation of digits (in
the above example seven binary digits are needed for the presentation of the decimal
number 90). Triples of digits are combined in the octal code, and quadruples in the
hexadecimal code reducing (formally) the number of digits to be considered. However,
this does not reduce the number of outputs in the circuit with the states L and H. Thus,
for circuits only binary codes are of importance. Even then there is, aside from the
weighted binary code, a plenitude of possibilities of representing all numbers from 0 to 2ⁿ − 1 in binary form. There are (2ⁿ)! variations possible. Of these, only a few, usually based on a weighted binary code, are of importance.
Some codes are hardware oriented, in particular to meet the needs of counters or
encoders. In a binary-coded-decimal (BCD) code (which is a concession to our decimal
world), each decimal place is mapped into a number (≥ 4, ≤ 10) of bits with (≥ 6) un-
used code patterns. We will introduce the appropriate number codes as needed when
introducing the electronic circuits in question.
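As a simple illustration of the BCD idea, the sketch below (which assumes the common 8-4-2-1 BCD variant; other BCD codes exist) maps each decimal digit into its own group of four bits:

def to_bcd(number):
    """Encode a non-negative decimal integer digit by digit into 4-bit groups."""
    return " ".join(format(int(d), "04b") for d in str(number))

print(to_bcd(90))     # 1001 0000  (one group of four bits per decimal digit)
print(to_bcd(3456))   # 0011 0100 0101 0110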
Problem
5.4. The symbolic presentation of a number depends on the base that is used. Present
the number 10 using the base 10, 9, 8, . . . to 1.
As life is sequential any process has a serial component. (It does not make sense to
build a one-shot electronic system.) Consequently, some processes as counting (based
on repeatedly incrementing an existing number) can only be done in series. Other op-
erations that can be done within a (short) resolution time can be viewed as being done
instantly, e.g., determining a coincidence (Section 4.4.2) or the use of a parallel ADC
(Section 6.4.2.1). In view of the fact that a timing component is unavoidable the notion
of a parallel (=one-shot) operation is difficult to realize. Let us call a (short) one-step
operation that occurs at some (naturally given) instant a parallel operation. Serial op-
erations are repetitive and are synchronized to some clock signal.
In many cases, the same result is obtained by using one element 𝑛 times (serial
operation) or by using 𝑛 elements once (see discussion in Section 6.4.2.1). Serial opera-
tion spares hardware, parallel operation time. There is a binary duality between serial
logic and parallel logic: the number of operational steps is inverse to the number of
operational elements. This duality between “time” and “space” is incomplete insofar
as there is no “single step” electronics. It will always be used sequentially. Although
single-step logic is not practical, a purely serial system is feasible. However, circuits
have become extremely cheap so that there is no need to spare circuits. Consequently,
we are refraining from dealing with purely serial digital circuits (e.g., serial memories,
serial shift registers) because pure serial logical circuits have de facto vanished.
Miniaturization is beneficial for both types of logic: more circuits can be realized in the same area, and the reduction of the length and the width (i.e., of the area) of intracircuit connections reduces capacitance, speeding up the circuit and allowing higher internal clock rates.
The usual configuration in a binary system is the repeated (=serial) use of circuits
which are arranged in parallel. This parallelism is expressed through the number of
bits that can be handled in one step by said circuit. Thus, essentially all digital systems
use parallel logic sequentially.
Problem
5.5. Does miniaturization conform to parallel or to serial logic?
by the production process, like the NMOS n-channel metal-oxide technology and the PMOS p-channel metal-oxide technology, which were easier to produce than the CMOS complementary metal-oxide technology, which is far superior with regard to power consumption (Section 4.4.2.2), allowing higher cell densities. Since the introduction of the first integrated circuits the speed increased by more than a factor of 10³ and the scale of integration by more than 10⁶.
Miniaturization usually has the following benefits:
– small mass (less weight, less material cost),
– small volume (the small surface is detrimental because it limits the transfer of the heat generated by the dissipated power, requiring the use of power-saving technology and of forced cooling),
– small power consumption (allowing high density of components, i.e., complex
circuits),
– higher reliability (which is inversely proportional to some power of the strongly
reduced number of components used in the fabrication of a digital system),
– lower cost (cheaper assembly, allowing cheap mass production), and above all
– strongly reduced internal capacitances (resulting in the increased speed of the
internal clock and reduced propagation delay).
An integrated circuit, in short IC, is monolithic, i.e., it uses a single semiconductor chip
as a substrate. There may be analog, digital, and mixed circuits on the same chip. The
miniaturized circuits of the next chapter need a mixed architecture. In the following,
we will sketch the evolution of integrated digital circuits. In the beginning, an inte-
grated digital circuit contained transistors numbering in the tens performing logic op-
erations (by way of logic gates). We now call this technology “small-scale integration”
(SSI). The demand for increased integration resulted in “medium-scale integration”
(MSI). Each chip contained hundreds of transistors. The next step was “large-scale
integration” (LSI), with tens of thousands of transistor functions per chip. Already at
the beginning of LSI single chip processors became reality. Further development of in-
tegration was dominated by the needs of producing chips for computers and mobile
phones. Over the decades the number of transistor functions on integrated circuits
doubled approximately every two years. CMOS technology (Section 4.4.2.2) made it
possible to integrate millions of logic gates on one chip by way of “very large scale
integration” (VLSI). An important by-product of this miniaturization is the much in-
creased internal clock frequency (beyond 1 GHz) due to the extremely small capaci-
tance associated with the short and thin internal connections. The term “ultra-large-
scale integration” (ULSI) is used for a chip complexity of more than 1 million transis-
tor functions. Semiconductor memory chips with a capacity of several tens of billions
transistor functions have been produced. With chip integration in several layers by
way of three-dimensional integrated circuits (3D-IC) faster operation due to optimized
signal paths and reduced power consumption can be expected. Obviously, design and
production of such circuits is not straightforward. Integrated digital circuits have be-
come so cheap that designing and building discrete digital circuits is an expensive
and, in addition, time-consuming enterprise. Consequently, it does not make sense
to discuss the circuits in detail. (The basic principles have been covered in parts of
Chapter 4.) However, knowledge of the types of circuits and their general behavior is
indispensable. Thus, we concentrate on that.
Problem
5.6. Make sure to understand all the reasons for miniaturization.
Integrated digital circuits belong to one of several logic families. Members of one family can directly interact with one another, i.e., they can be interfaced (Section 3.5) by plain
wires. The output and input levels (L and H), the power supply voltage and the clock
rate are standardized within a family so that interoperability among them is guaran-
teed. Logic families usually have many members, each tailored for a specific task. They
can be used as “building blocks” to create complex circuits.
Among the presently most prominent families, there are two based on bipolar NPN
junction transistors (TTL and ECL), two based on field-effect transistors (CMOS and
NMOS for computer chips) and one family incorporating both types of active elements
(BiCMOS).
Over the years integrated circuits have become faster and more complex. After
having minimized the parasitic capacitances, time constants can only be made shorter
by making resistances smaller. Lowering the resistances requires lowering the power
supply voltage to keep the dissipated power small. Thus, circuits operating with low
supply voltages were designed. The greatest power saver is, however, the use of com-
plementary circuits (CMOS, Section 4.4.2.2). Only this technology made VLSI possible.
The difference in technology between the families resulted in different supply volt-
ages and different voltage levels for L and H. Interconnecting circuits of any two logic
families requires special interfacing techniques, e.g., so-called pull-up resistors. Thus,
CMOS circuits were developed that are completely compatible with the TTL family,
even pin-compatible. In the case of TTL the introduction of Schottky technology was
a breakthrough with regard to speed. Merging TTL technology with CMOS technology combined the best of both technologies in the BiCMOS family.
Programmable logic devices (PLD, firmware) are in competition with the fixed-wired circuits of the logic families. Different LSI-type functions such as logical gates, flip–
flops and adders can be implemented on a single chip. Present field-programmable
gate arrays contain tens of thousands of LSI circuits which can be operated at high
speed.
As a rule the logic states high H and low L are represented by two voltage levels
quite different for each logic family. Inside a circuit the logic levels are narrowly de-
Fig. 5.2. SSI: Schematic of a 4-input positive logic NAND gate in TTL technology.
fined as, e.g., we discussed for the Schmitt trigger (Section 4.1.2.1). Aside from these
logic levels there are the so-called wire levels for outside signals allowing some tol-
erance in the voltage levels. These tolerances are necessary to take into account the
fan-out, i.e., the loading of the circuits. Between the L-band of allowed voltages of L-
levels and the H-band of allowed voltages of H-levels there is an invalid, intermediate
voltage range in which a state would be undefined, representing a faulty condition.
Of course, during logic transitions the instantaneous operating points sweep through
this range. If this sweep is fast enough, i.e., the rise and fall time of the input signal
short, the existence of this forbidden band is not noticed. However, if by chance or
on purpose the slew rate (or rise time) is too slow, oscillations between L and H may
occur (see also meta-stability, Section 5.3.2).
Problem
5.7. In which regards do logic families differ from each other?
Fig. 5.3. Circuit diagram of a 2-input positive logic OR gate in ECL technology.
tors one can conclude that it is more difficult to produce integrated circuits with pnp
transistors and even more so to produce integrated circuits based on complementary
bipolar junction transistors which, on the other hand, has been done with CMOS FETs
for a long time already. Combining bipolar junction transistors (with their high speed
and high gain) with CMOS FETs (with their high input impedance) constitutes an ex-
cellent condition for constructing fast low-power logical gates. Combination of the two
technologies has existed for a long time in discrete circuits, however, without imple-
mentation in integrated circuits such circuits were not frequently used.
First, the BiCMOS technology was applied in operational amplifiers and other
analog circuits (e.g., comparators, voltage regulators, DACs). High current circuits use
metal-oxide-semiconductor field-effect transistors for efficient power control, whereas
bipolar elements perform specialized tasks. Several integrated microprocessors are
based on BiCMOS technology. BiCMOS will not replace CMOS in pure digital logical
circuits because it cannot compete with the low power consumption of CMOS. It is
used for special tasks as, e.g., bus transceivers. Reducing the number of chips in elec-
tronic systems by combining two chips into one increases the speed and reliability,
and reduces cost and size.
Practically all binary circuits will have a clock input necessary for synchronizing the output because of the intrinsic necessity of serial operation.
The clock signal “enables” the transmission of the logical result to the output. Conse-
quently, we use the letter E for the input of the clock signal, the enable signal or the
strobe signal.
There are circuits without feedback, so-called combinational circuits, for which
a simple truth table relates all input values to all output values. Then there are the
so-called sequential circuits, for which the current output values are a function of the
values of the current inputs, past inputs, and past outputs.
Problem
5.11. Which timing information does a fully digital signal contain?
Fig. 5.4. Composition of six basic logical circuits by using two-input NAND gates: (a) NOT gate, (b) AND gate, (c) OR gate, (d) NOR gate, (e) XOR gate, (f) SR flip–flop.
It is curious that any basic logic circuit can be built from two-input NAND or NOR gates
which for that reason are called universal gates. Figure 5.4 shows six configurations
with NAND gates representing six basic logic circuits. The symbol used in this figure
is that of a two-input NAND gate. At the moment, we are not going to discuss it any
further.
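To make this universality tangible, here is a behavioral sketch (in Python, not taken from the book) of constructions in the spirit of Figure 5.4, using 0 and 1 for the two states; the XOR construction shown is one common variant and need not use the same number of gates as the figure:

def nand(a, b):
    """Two-input NAND gate acting on the logical values 0 and 1."""
    return 0 if (a and b) else 1

def not_(a):     return nand(a, a)               # NOT from a single NAND
def and_(a, b):  return not_(nand(a, b))         # AND = NOT(NAND)
def or_(a, b):   return nand(not_(a), not_(b))   # OR via De Morgan
def nor_(a, b):  return not_(or_(a, b))          # NOR = NOT(OR)

def xor_(a, b):                                  # XOR from four NANDs (one common variant)
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):                                 # combined truth table of the derived gates
    for b in (0, 1):
        print(a, b, "->", and_(a, b), or_(a, b), nor_(a, b), xor_(a, b))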
Problem
5.12. Redo Figure 5.4 using two-input NOR gates.
A  Q     Aₚ  Qₚ     Aₙ  Qₙ
L  H      0   1      1   0
H  L      1   0      0   1
5.15. Generate the truth table of a two-port NAND gate used as a NOT gate.
Table 5.3. Truth tables of the AND gate (Q) and the NAND gate (Q̄), in logical values (left, positive logic) and in electronic states (right).
A B Q Q̄     A B Q Q̄
0 0 0 1      L L L H
0 1 0 1      L H L H
1 0 0 1      H L L H
1 1 1 0      H H H L
Fig. 5.6. Logical symbol of a two-input AND gate (a) and a two-input NAND gate (b).
Table 5.4. Truth tables of the OR gate (Q) and the NOR gate (Q̄), in logical values (left, positive logic) and in electronic states (right).
A B Q Q̄     A B Q Q̄
0 0 0 1      L L L H
0 1 1 0      L H H L
1 0 1 0      H L H L
1 1 1 0      H H H L
Fig. 5.7. (a) Logical symbol of an OR gate, (b) logical symbol of a NOR gate, and (c) two-input NOR function realized by four two-input NAND gates.
If one compares the truth table of the positive logic AND gate with the negative
logic OR gate, it is obvious that these truth tables are identical, i.e., one circuit acts
either as the AND gate (using positive logic) or as the OR gate (using negative logic).
Using the letters L and H rather than 0 and 1 (as usually done) for the two output levels
of binary electronic circuits is without ambiguity. If done differently, one must state
the type of logic that is used. Then, the ambiguity disappears even when using the
logical values 0 and 1.
In the following, we are going to follow this practice to deal with logical functions rather than with
electronic circuits.
Figure 5.7 shows the logical symbol for the OR and the NOR gate, and the realization
of a two-input NOR gate by four two-input NAND gates.
Let us investigate the electronic behavior of a specific NOR circuit (Figure 5.8a).
In this case, the signal to the second input passes through three inverters so that its
value is inverted. A truth table would show that the output value is 0 independent of
the input signal. However, when looking at the voltage levels (Figure 5.8b) – assuming
positive logic – we notice a short H signal at the output with a length of three propa-
Fig. 5.8. (a) NOR circuit with three inverters in front of one input, and (b) voltages at the two inputs and at the output.
gation delays of a NOT circuit. This teaches us the difference between binary logic and
digital circuitry. To avoid such “surprises” one must either make sure that the path
lengths of signals do not differ too much or use a clock signal that defines the moment
of the correct answer (as given by the truth table).
NAND or NOR circuits are called universal gates because they may be used to build any other logical
circuit.
Problems
5.16. Generate the truth table of a two-input OR gate made of three NAND gates.
5.17. Generate the truth table of a two-input NOR gate, made of four NAND gates.
5.18. Construct a NOT circuit from a two-input NOR gate and give the truth table.
Problem
5.19. Generate the truth table of a two-input XOR gate made of five NAND gates.
Fig. 5.9. (a) Logical symbol of an XOR gate, and the function realized by two-input NAND gates (b) and (c).
Fig. 5.10. Realization of a half-adder. S is the sum output, and C the carry output.
A B C S
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
In Section 4.1.3.2, we came across the bistable multivibrator with the two stable qui-
escent operating points (voltage levels) L and H. A typical bistable electronic circuit
is symmetric, i.e., it has two primary inputs and two outputs, Q and Q̄. The output Q̄ delivers the inverted value of Q, i.e., if one output is H, the other is L, and vice versa (see the symbol in Figure 5.11).
Without signal, the output operating point will stay in its binary state as long as
the circuit is operative, i.e., as long as the necessary power for its operation is supplied.
Thus, the binary information is stored allowing the use of this circuit as a memory cell.
Such a memory is volatile, i.e., the information gets lost if the supply voltage is gone.
It is not within the scope of this book to discuss memories in general, neither their
structures nor the many ways they can be realized.
Bistable circuits are used extensively in semiconductor registers and memories.
If a bistable circuit is intended for data storage, it is called a latch. A latch where the
output value is the instant result of a signal at the input is called transparent. By means
of an additional input (named enable), it becomes nontransparent (or opaque).
By means of input signals, the output level may be switched back and forth from one state to the other. If switching the circuit is the primary intention, such circuits are rather called flip–flops. Flip–flops are used in counters or for
synchronizing input signals fluctuating in time to some reference timing signal. The
latter application requires clocked (synchronous or edge-triggered) flip–flops.
Feedback as applied in Section 4.1.3.2 to active three-terminal components may
be applied to (integrated) gates, as well. As shown in Figure 5.12, a latch can be con-
Fig. 5.11. Symbol of an SR latch. The symmetry of a bistable circuit with the two inputs S and R and the complementary outputs Q and Q̄ is obvious.
Table 5.6. Truth table of a NAND gate latch; Q𝑛−1 previous (output) state, Q𝑛 (output) state with
input signal applied.
R Q𝑛−1 Q𝑛 S Q𝑛−1 Q𝑛
1 0 or 1 Q𝑛−1 1 1 or 0 Q𝑛−1
0 0 or 1 1 1 1 or 0 0
1 0 or 1 0 0 1 or 0 1
0 0 or 1 Irregular 0 1 or 0 Irregular
structed by NAND gates forming the so-called NAND-gate latch. Its truth table is given
in Table 5.6. A 1 at input R resets the output Q to 0, a 1 at input S sets the output Q to
1. If both input levels are 0, no action is taken, i.e., the state of Q remains unchanged,
namely Q𝑛−1 .
Note that the feedback of the NAND gates is mutual; each one is in the feedback
loop of the other. Since either is inverting, the two stages connected in series provide
the noninverting amplification needed for positive feedback.
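The mutual feedback can be imitated by iterating the two gate equations until they settle. The sketch below is a behavioral illustration only; its active-low set/reset inputs s_n and r_n are an assumption of this sketch rather than the input convention of Table 5.6:

def nand(a, b):
    return 0 if (a and b) else 1

def nand_latch(s_n, r_n, q=0, q_bar=1):
    """Cross-coupled NAND gates: each output is fed back to the other gate."""
    for _ in range(4):                           # a few passes are enough to settle
        q, q_bar = nand(s_n, q_bar), nand(r_n, q)
    return q, q_bar

print(nand_latch(0, 1))                          # set:   (1, 0)
print(nand_latch(1, 0))                          # reset: (0, 1)
print(nand_latch(1, 1, 1, 0))                    # inputs inactive: previous state (1, 0) is held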
Flip–flops with an additional enable input can be clocked (pulsed or strobed).
Such devices ignore the signal at the inputs except at the transition of an enabling sig-
nal. Clocking causes the flip–flop to either change or sustain its output signal, depending on the value of the input signals at the time of the transition. A flip–flop changes
its output state either on the rising edge of the clock signal or on the falling edge. The
data transfer from input to output at the moment of the enabling signal (clock or oth-
erwise) introduces two additional properties that must be considered:
– propagation delay, and
– meta-stability.
Table 5.7. Truth table of an SR latch; Q𝑛−1 previous (output) state, Q𝑛 (output) state with the input
signal applied.
R S Q𝑛−1 Q𝑛 Function
0 0 0 or 1 Q𝑛−1 Storage
1 0 0 or 1 0 Reset
0 1 0 or 1 1 Set
1 1 0 or 1 n/a Irregular
From the truth table (Table 5.7), we see that the set signal (value 1 at the S input)
produces the state 1 (at the Q output) independent of the previous state. The reset
signal (value 1 at the R input) produces the state 0 (at the Q output) independent of
the previous state. Obviously it would be irregular to have the value 1 at both inputs.
Figure 5.13 shows a schematic of an SR latch. It is a NAND gate latch with inverters
at the inputs.
To overcome the irregular state, one can add logic circuitry that converts the input
pair (S,R) = (1,1) to one of the allowed combinations. That can be:
– an S-latch, converting (1,1) to (0,1), i.e., Set is dominant,
– an R-latch, converting (1,1) to (1,0), i.e., Reset is dominant, and
– a JK-latch with Q𝑛 = Q̄𝑛−1 for inputs (1,1), i.e., the (1,1) combination would toggle the output.
If, as in Figure 5.14, the two input NAND gates are activated by a common enable signal
E, a synchronous SR latch (or clocked SR flip–flop) results.
The enable input signal applied at E can be called enable, read, write, strobe, or clock signal. When the flip–flop is enabled, S and R signals can pass through to the
Fig. 5.14. Logical symbol of a synchronous SR flip–flop.
(Q, Q̄) outputs, i.e., the latch is transparent. When not enabled, the latch is closed, and the outputs remain in the states that existed at the end of the last enabling signal.
Problem
5.21. Generate the truth table of an SR flip–flop using four two-port NAND gates.
Clock D Q𝑛
Rising edge 0 0
Rising edge 1 1
Else 0 or 1 Q𝑛−1
From the truth table one can see that without enable/clock signal the level of the D
input has no effect on the output. With the rising edge of the enable signal the output
value becomes that of the D input. If, as in Figure 5.15, both an S and an R input is
present, these inputs have the same function as in the SR flip–flop, overriding the
signal at the D input.
Cascading two D memory cells as shown in Figure 5.16 makes it possible to have
during some time interval (the duration of the clock signal) both the new and the pre-
vious D signal at one’s disposal. Such an arrangement is called master–slave edge-
triggered D flip–flop. The enable signal for the second stage is inverted so that at the
time of the 1 to 0 edge of the enable signal a 0 to 1 edge is created which is delayed by
Fig. 5.15. Logical symbol of a clocked D flip–flop with enable (E), reset (R), and set (S) input.
Fig. 5.16. Symbolic presentation of a master–slave D flip–flop.
Table 5.9. Truth table of a master–slave D flip–flop.
D   Q𝑛−1     Clock    Q𝑛
0   0 or 1   1 to 0   0
1   0 or 1   1 to 0   1
the length of the clock signal. Therefore, the output of the first stage reflects already
the new input state D𝑛 whereas that of the second stage still has the information on the
previous input state D𝑛−1. Only after the 1-state of the clock signal is over (i.e., the clock is zero) are both flip–flops in the same state. Table 5.9 presents the truth table of a master–slave D
flip–flop. The inversion of the clock signal for the slave latch results in a synchronous
system with a two-phase clock, where the operation of the two latches with different
clock phases prevents data transparency.
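A behavioral sketch (Python, not from the book) of this two-phase operation: the master latch is transparent while the clock is 1, the slave while it is 0, so the output can change only at the 1-to-0 edge of the clock.

class MasterSlaveD:
    """Minimal behavioral model of a master–slave D flip–flop (cf. Figure 5.16)."""

    def __init__(self):
        self.master = 0                          # output of the first (master) latch
        self.q = 0                               # output of the second (slave) latch

    def tick(self, clock, d):
        if clock:                                # clock = 1: master follows D, slave holds
            self.master = d
        else:                                    # clock = 0: slave copies the master, master holds
            self.q = self.master
        return self.q

ff = MasterSlaveD()
for clock, d in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    print(clock, d, "->", ff.tick(clock, d))     # Q changes only when the clock goes to 0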
Problem
5.22. What is the purpose of master–slave flip–flops?
Fig. 5.17. Logical symbol of a strobed T flip–flop.
Table 5.10. Truth table of a T flip–flop when the enable is active (H).
T Q𝑛−1 Q𝑛 Action
0 0 0 Hold state
0 1 1 Hold state
1 0 1 Toggle
1 1 0 Toggle
Fig. 5.18. Logical symbol of a strobed JK flip–flop (with the enable input E).
Table 5.11. Truth table of a basic JK flip–flop with the active enable input.
J K Q𝑛      Action
0 0 Q𝑛−1    Hold state
0 1 0       Reset
1 0 1       Set
1 1 Q̄𝑛−1    Toggle
Problem
5.23. Name the main use of T-flip–flops.
5.3.2.4 JK flip–flop
In Figure 5.18, the symbol of a strobed JK flip–flop is shown.
When comparing the truth table Table 5.11 with that of the SR flip–flop (Table 5.7)
we see that J and S perform the set, and K and R the reset task. However, now the
S = R = 1 (i.e., the J = K = 1) condition makes the flip–flop toggle, i.e., it changes
the output to the logical complement of its current value.
The JK flip–flop is the general type flip–flop because it can be used as an
– SR flip–flop by making S = J and R = K, a
– D flip–flop by setting K = J̄ (the complement of J), or a
– T flip–flop by setting K = J.
The timing diagram of Figure 5.19 should support a better understanding of the per-
formance of a JK flip–flop.
Problem
5.24. Verify that a JK flip–flop is a combination of a T flip–flop and an SR flip–flop.
A register is an array of binary storage elements (e.g., latches) which stores informa-
tion and makes it readily available to logical circuits. To this end, information must
be written into the register and be read from the register. In special applications, the
information may be modified, as well. One flip–flop can store one bit of information. A
group of 𝑛 flip–flops forms an 𝑛-bit register. The state or content of a register is stored
bitwise in the flip–flops. Obviously, the number of 0–1 combinations in a register is
finite. It is convenient to combine the bitwise data into words of 𝑛 bit with typical word lengths of 2², 2³, 2⁴, 2⁵, or 2⁶ bit. For that reason, flip–flops are united to form 𝑛-bit
registers storing data words of 𝑛 bit. All flip–flops of a register are synchronously con-
trolled by a single clock line.
If the output of each flip–flop of a register is accessible, such a register has random access, i.e.,
any bit can be read at any time.
Problem
5.25. By connecting the end of a delay line (coaxial cable) via a signal regenerating
circuit (e.g., a Schmitt trigger) to the input, binary information (bits) can be stored in
this volatile serial register.
(a) Which properties of the delaying medium limit the number of bits that can be
stored?
(b) Can the stored information be accessed randomly?
If 𝑛 flip–flops are cascaded (i.e., output connected to the input of the next flip–flop)
and all flip–flops are synchronously controlled by a single clock line, we have an 𝑛-bit
shift register. Figure 5.20 is a schematic of a 4-bit shift register using D flip–flops.
Because of the synchronous operation the signal on the D input is captured only
at the instant the flip–flop is clocked and is ignored at other times. Signals at optional
additional inputs (set or reset) may act either asynchronously or synchronously with the clock.
Fig. 5.20. Schematic of a 4-bit shift register using D flip–flops (data input, outputs QA to QD, common clock).
On each active transition of the clock, a shift register shifts the contents of one
flip–flop to the next (to the right). The first flip–flop accepts the input data for storage.
The data stored in the 𝑛th stage is removed. It may be used by other logic circuits.
Thus, a shift register has one input. As the signals at the output of each flip–flop are
accessible, it has 𝑛 outputs, i.e., 𝑛 bits (i.e., the register’s complete content) may be
read simultaneously.
From mathematics, we know that shifting the decimal point in a number by 𝑛
places reduces or increases the said number by 10𝑛 , depending on the direction of the
shift. Thus, it is no surprise that shifting a weighted binary number that is stored in a
shift register 𝑛 times reduces or increases this number by a factor of 2𝑛 depending on
the direction of the shift. Thus, shift registers may be used for multiplying or dividing
by 2𝑛 , depending on protocol, i.e., whether the most significant bit is assigned to the
first or the 𝑛th flip–flop.
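The arithmetic effect of shifting is easily checked; the sketch below (illustrative only) uses the binary number 1011010₂ = 90 from Section 5.1.1 and an assumed register width of 7 bits:

n = 0b1011010                 # 90 in straight binary
print(n << 1)                 # 180: shifted by one place toward the MSB (multiplied by 2)
print(n >> 1)                 # 45:  shifted by one place toward the LSB (divided by 2)

width = 7                     # assumed register width in bits
mask = (1 << width) - 1
print((n << 1) & mask)        # 52: in a fixed 7-bit register the shifted-out MSB is lost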
Problem
5.26. A weighted binary number stored in a shift register is shifted by one place. The
𝑛th flip–flop stores the most significant bit.
(a) Is the resulting binary number divided by 2 or multiplied by 2?
(b) Name two conditions necessary to yield an exact result.
If the output of a register is fed back into its input, a cyclic register results. Thus, loss
of the data in the 𝑛th bit can be avoided. After 𝑛 shifts the original register content
is restored. The data pattern present in the shift register recirculates as long as clock
pulses are entered.
5.29. What is the disadvantage of a straight ring counter over the standard binary
counter?
Fig. 5.22. Switch tail ring counter with a 4-bit shift register using D flip–flops.
5.30. What is the advantage of a straight ring counter over the standard binary
counter?
Example 5.2 (Performance of a 4-bit switch-tail ring counter). By resetting all flip–
flops (by a signal to the input R), the switch-tail ring counter is put into the starting
condition with a bit pattern of 0000 (Table 5.12). The first (shift) signal received at the
Table 5.12. Decimal equivalent of the output pattern of the switch tail ring counter of Figure 5.22.
QA QB QC QD    Decimal
0  0  0  0     0₁₀
1  0  0  0     1₁₀
1  1  0  0     2₁₀
1  1  1  0     3₁₀
1  1  1  1     4₁₀
0  1  1  1     5₁₀
0  0  1  1     6₁₀
0  0  0  1     7₁₀
Ring counters and, in particular, switch-tail ring counters are typically used as decade
(and binary) counters, as decimal (and binary) display decoders, and for frequency di-
vision (as prescalers, i.e., divide-by-n counters, e.g., in timers). They are also available
as integrated circuits. An integrated 5-bit switch-tail ring counter will have 10 decoded
outputs for decimal use.
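The output sequence of Table 5.12 can be generated by mimicking the feedback of the inverted last output into the first flip–flop; a minimal sketch (not a circuit description):

def switch_tail_ring_counter(n_bits=4, steps=8):
    """Print the bit patterns of an n-bit switch-tail (Johnson) ring counter."""
    state = [0] * n_bits                         # reset condition, e.g., 0000
    for count in range(steps):
        print("".join(map(str, state)), "->", count)
        state = [1 - state[-1]] + state[:-1]     # shift; feed back the inverted last output
    # after 2 * n_bits shifts the pattern 0000 recurs

switch_tail_ring_counter()                       # reproduces the sequence of Table 5.12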
Problem
5.32. What is the advantage of a switch-tail ring counter over the straight ring
counter?
Counting means repeatedly incrementing a stored number; T or JK flip–flops (in a register) will do just that with binary numbers. Although counting can be done unstrobed, it is often performed in relationship to a clock signal (using strobed flip–flops). Let us take a 4-bit register. After a reset signal, its contents will be 0000₂ which is 0₁₀. A signal to the first stage will toggle its content so that 0001₂ (i.e., 1₁₀) is stored in the register. The second signal will toggle the state of the first flip–flop
to 0. If a transition from 1 to 0 at the output of a flip–flop is used to toggle the next flip–
flop, there will be a 4-bit binary counter. Table 5.13 shows the binary pattern obtained
this way.
Thus, the contents of the counting register are a 4-bit word given in the basic
weighted binary code (Section 5.1.1). There are counters that deliver the result in a
different code (e.g., ring counters, Sections 5.4.2.1 and 5.4.2.2, or BCD counters, Sec-
tion 5.4.3.3). The counter described here is an asynchronous counter because only the
first stage is incremented by the input signal the others are incremented by the output
signal of each previous stage.
Table 5.13. Binary pattern in a 4-bit binary counter when counting up to 15.
0 0000 1111
1 0001 1110
2 0010 1101
3 0011 1100
4 0100 1011
5 0101 1010
6 0110 1001
7 0111 1000
8 1000 0111
9 1001 0110
10 1010 0101
11 1011 0100
12 1100 0011
13 1101 0010
14 1110 0001
15 1111 0000
Due to its repetitive pattern, counters may be used as prescalers. A 4-bit binary counter downscales the input signal rate by 2⁴ = 16. Prescaling modulo 𝑘 (with 𝑘 any natural number) can be accomplished with an 𝑛-bit counter if 2ⁿ ≥ 𝑘. To shorten the repetition cycle to a desired length there are two easy ways. The obvious one is to reset the register to 0 with the next count after the contents have become 𝑘 − 1. Alternatively, the next count after the contents have become 2ⁿ − 1 sets the counter to 2ⁿ − 𝑘, accomplishing the same prescaling by 𝑘.
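A sketch of the first of these two schemes, i.e., a modulo-𝑘 prescaler that resets the register with the next count after 𝑘 − 1 (illustrative Python, not a circuit description):

def prescaler(input_pulses, k):
    """Yield one output (carry) pulse for every k input pulses."""
    count = 0
    for _ in range(input_pulses):
        count += 1
        if count == k:                           # the next count after k - 1 resets the register
            count = 0
            yield 1                              # carry pulse: one output per k inputs
        else:
            yield 0

print(sum(prescaler(100, 10)))                   # 10 output pulses for 100 input pulses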
If the prescaling factor 𝑘 is fixed and even, it is good practice to split the prescaling
circuit up into two stages. The first stage should be purely binary, and the other stages
should provide the prescaling by the remaining factor 𝑘/2. This way the higher signal
frequencies are handled by straight binary prescaling without the need of any logical
decisions. Such provisions reduce the dead time of a counter which is essential when
statistically arriving signals are to be counted.
If a frequency is to be divided by means of a prescaler, a synchronous counter
should be used.
Problem
5.33. What dead time would the first stage of a scaler need so that it does not lose a single statistically arriving signal?
Fig. 5.23. (a) Schematic of an unstrobed asynchronous 4-bit counter; (b) signal waveforms.
overflow) is connected to the input of the next one. These overflows “ripple” from
stage to stage, which leads to delays at the higher data bits making this technique unfit
for synchronous applications. However, it is fit for prescaling applications. Figure 5.23
shows a schematic of an unstrobed (asynchronous) 4-bit counter together with the
signal waveforms.
In Figure 5.23b, the numbering of the counts starts with 1₁₀ and not with 0₁₀ as done in Table 5.13. Be aware of this inconsistency. One should be familiar with this problem as the first 10 digits are from 0 to 9, but one counts from 1 to 10! Besides, observe that the frequency of the output signal is highest at QA (the first stage).
Therefore, the highest repetition rate that can be handled depends on the speed of the first stage.
As the output of normal counters will never be a negative number, there is no need to provide storage for negative numbers, reducing the needed number of memory cells by a factor of 2.
Problem
5.34. What is the reason for the name asynchronous counter?
bit at the place of the number, e.g., 9₁₀ would be 1001₂ using the first code and 1111₂
with the second code. To operate a display from the output of a BCD counter one needs
a decoder that converts the output signal at the four flip–flops to the appropriate input
signals of the display. If a popular seven-segment display is used, such a decoder would
be called a four-to-seven decoder.
The combination of a divide-by-two counter and a divide-by-five counter to ac-
complish a prescaling by a factor of ten (i.e., to design a decimal counter) is insofar
superior as the highest frequency is handled without the need of any logic by a single
flip–flop that can be viewed as a fast prestage. In that case, another weighted code,
the 5-4-2-1 code, is appropriate.
Problems
5.36. A straight binary counter uses all 16 different patterns provided by the 4 bits.
A decimal counter only needs 10 to represent the decimal numbers from 0 to 9. How
many variations of unique codes exist for the representation of these 10 numbers?
5.37. Give the binary equivalent of 9₁₀ using the 5-4-2-1 code.
The result obtained by any counter is basically an integer. Thus, the counted number is without
uncertainty.
Problem
5.38. Design a divide-by-two prescaler using a down counter.
Fig. 5.25. Schematic of an asynchronous simultaneous 4-bit up/down counter based on strobed JK flip–flops.
Time is a very important parameter in binary and other circuits. It appears in several parameters:
– absolute time – in relation to an absolute clock,
– synchronization – in relation to an internal clock,
– rise and fall time – occurring in individual signals, and
– pulse length – which is by itself an analog quantity.
In each of these cases, the so-called time jitter must be considered (Section 4.4.1.1).
Disregarding the amplitude there is no difference between analog and digital
(rectangular) signals. However, there exist two amplitude levels for binary signals
within a rather narrow band in amplitude whereas the amplitude of analog signals
may be anywhere in the full dynamic range. With regard to time, only one property
can be claimed to be truly digital. A truly digital signal is synchronized to a clock
allowing serial operation. In this case, the timing is stringent but arbitrary, i.e., the
exact frequency of the clock does not really matter. Thus, only signals synchronized
by a clock are digital in time.
Problem
5.39. An analog output signal will have an intrinsic rise time (showing up in the input
signal) and a circuit dependent rise time (from the transmission through the circuit).
How does the intrinsic rise time of the input signal affect the output signal of a digital
circuit?
Serial logic processes require a clock signal for synchronous operation. It must be
uniquely known in any timing period which step of the protocol is executed. This
sounds trivial for single tasks in isolated systems. However, if several systems have to
work together or when multitasking is performed or when distributed (computer) sys-
tems must cooperate, it can become very difficult to ensure that the actions of the var-
ious systems fit together. This requires process synchronization which establishes the
necessary sequence of action. In a centralized system, the clock of the central server
dictates the time. In a distributed system, a global time is not easily established. The
network time protocol is commonly used for distributed clock synchronization.
Synchronization at the lowest level, i.e., of individual circuits, is done by means of
a clock signal at the enable input. One can distinguish two variants of how the timing
is derived from the clock signal:
(a) Coincidence with the clock signal: enable is enacted during the (usually positive)
clock signal
(b) (Rising or falling) leading edge triggering: When shortening the active clock signal
(within an element) to a very short time it will be active only at the rising edge of
the clock signal. Thus, the active time is much more narrowly defined than using
all of the clock pulse.
For this reason, converting analog information into a digital one is particularly impor-
tant. The three important analog variables are as follows:
– amplitude,
– time (instant of a signal, or time difference; rise time of a pulse is appendant to
time difference), and
– frequency.
If the instant is digitized, it means that the output signal is synchronized to a (refer-
ence) clock as done, e.g., in sampling (Section 4.4.3.1) and in the synchronous counter
(Section 5.4.3.2). Digitizing a time difference (Section 6.2) is quite something else.
Measuring without digitizing is not possible. The result of any measurement is a num-
ber, a digitized result, in most cases a floating point decimal number. However, count-
ing does not need any digitization as the elements to be counted are quantized. Con-
sequently, a counted result is an integer number. Digitization in electronics means (in
almost all cases) to transform an analog signal into numerous L and H states (Sec-
tion 4.1.2.1) which then are interpreted as binary digits (e.g., for positive logic as 0 and
1) to build a binary (based) number by a diversity of codes.
An analog-to-digital converter (in short ADC, or A/D) converts a continuous phys-
ical quantity at some moment to a number representing the quantity’s amplitude.
This number is a multiple of the quantization unit, in binary conversion the least-
significant-binary unit (in short: bit, respectively LSB). The conversion introduces
Resolution gives the number of discrete values that are produced over the full ampli-
tude range of the analog value (its full scale FS or dynamic range). The values are
usually stored by way of a binary code so that the resolution is expressed in bits (bi-
nary digits). Therefore, the number of discrete values is a power of 2. An ADC with
a 10-bit output has a range of 1024 (= 2¹⁰ ≈ 10³) unique output codes defining an equal number of input signal amplitude levels. Each voltage interval lies between two consecutive code levels. Normally, the number 𝑁 of voltage intervals is given by 𝑁 = 2ⁿ − 1, where 𝑛 is the ADC's resolution in bits. Over the full range of an ADC with 10 bits, there will be exactly 1024 unique binary output numbers (from 0000000000₂ to 1111111111₂, corresponding to the decimal numbers 0₁₀ to 1023₁₀). Thus, such an ADC provides a resolution of about 0.1% (10⁻³).
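The relation between the number of bits, the size of the LSB, and the relative resolution can be written down directly; the sketch below assumes, purely for illustration, a 10-bit ADC with a 2 V full-scale range:

n_bits = 10
full_scale = 2.0                                 # volts, an assumed full-scale range
codes = 2 ** n_bits                              # 1024 unique output codes
lsb = full_scale / codes                         # one quantization step, here about 1.95 mV
print(codes, lsb, 1 / codes)                     # relative resolution of about 0.1 %

def code_of(v):
    """Output code of an idealized converter for an input voltage v (clipped to the range)."""
    return min(codes - 1, max(0, int(v / lsb)))

print(code_of(1.234))                            # 631: the code stands for 631 * lsb = 1.232 V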
The duration of the digitizing process of an ADC determines its conversion rate or
sampling frequency. The latter term applies for ADCs that are continuously sampling
analog inputs. Like resolution, the required conversion rate depends on the specific
application for which the ADC is needed, i.e., on the highest frequency present in the
analog input signal. When the system is too slow, fast changes of the analog signal will
be missed. As discussed (Section 4.4.3.1), the highest frequency waveform an ADC can
convert correctly is the Nyquist frequency that equals one-half of the ADC’s conversion
rate.
If the input signal contains a frequency that exceeds the Nyquist frequency of that
ADC, the digitized signal will have a false low frequency. This phenomenon is called
aliasing. Figure 6.1 demonstrates how aliasing comes about.
Figures 6.1a through e show cases of aliasing where there is no way whatsoever to
reconstruct the original analog signal from the digital output. It should not surprise
that the practical sampling rate required for a good representation of the analog sig-
nal by the digital output must be considerably higher than twice the maximum input
Fig. 6.1. Demonstration of cases of aliasing. The sampling rate is not high enough so that the digital output signal cannot catch the frequency of the analog input signal. Observe the difference between the analog and the digital period.
frequency (which is just the theoretical low limit). For efficiency reasons, signals are
sampled at the minimum rate. Sampling at a much higher rate than demanded by the
Nyquist–Shannon sampling theorem is called oversampling. Oversampling allows digital operations that improve the result of the conversion.
If a sufficiently high sampling rate is not available, frequencies that are too high must be removed from the input signal by means of a low-pass filter. This filter distorts the input signal by removing frequencies above half the sampling rate, but the digitized output will still resemble the original analog signal to some degree. This would not be the case at all if aliasing occurs due to the admixture of higher frequencies. It is obviously better that such frequencies get lost than to have them cause aliasing, producing false output signals. Such a filter is called an anti-aliasing filter and is essential for practical ADC systems if the input signal contains higher frequency components.
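The false low frequency produced by aliasing can be predicted by folding the signal frequency back into the band below the Nyquist frequency; a small sketch with arbitrarily chosen example values:

def apparent_frequency(f_signal, f_sample):
    """Frequency that a sampled sine of f_signal appears to have."""
    f = f_signal % f_sample                      # fold into one sampling period
    return min(f, f_sample - f)                  # reflect into the band 0 ... f_sample / 2

print(apparent_frequency(60.0, 100.0))           # 40.0: a 60 Hz sine sampled at 100 Hz looks like 40 Hz
print(apparent_frequency(49.0, 100.0))           # 49.0: below the Nyquist frequency, no aliasing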
Fig. 6.2. Symbol of a device performing analog-to-digital conversion. The answer to an analog input signal is a binary output signal.
The process of digitizing is, as any process, a sequential one, i.e., it takes time to com-
plete it. During the conversion time, the input value must be held constant, e.g., by a
circuit called sample-and-hold (Section 4.4.3.2). Monolithic ADCs usually provide the
sample and hold function internally. Because of the finite conversion time, it is impos-
sible to completely digitize a continuous variable. Digitization can be done either by
sampling at instants given by a clock (Sampling, Section 4.4.3.1) which digitizes the
instant of the sample, too, or at natural instants like the occurrence of a pulse (or of
its maximum). In both cases, there will be a time span afterwards in which no other conversion process is possible. This period is called either busy time or dead time. Depending on the digitizing method, the dead time is either independent
of the size of the signal or not. The sampling process by which amplitude values are
acquired at discrete instants in time is irreversible as the timing information cannot
be recovered.
Higher sampling rates require faster digitizing. Fast digitization is accompanied
by inferior performance with regard to resolution. As a consequence, there does not
exist a single method that is best for all applications.
The conversion time is often decisive for the choice of a particular digitizing
method. However, there is a trade-off between resolution and conversion time. This
The basic digitizing process is quantization: Continuous (analog) values are con-
verted into quantized values being multiples of the smallest quantized value, the
least-significant bit (LSB). Quantization is irreversible as it is associated with the loss
of information.
There are two limiting factors for the resolution of a digitizing process:
– the capability of the digitizing circuit, and
– the ratio of the noise level of the signal to the full amplitude (dynamic) range of
the device.
The useful resolution of a converter is restrained by the signal-to-noise ratio (SNR) of the signal.
For some applications (in particular in the field of audio systems), admixed noise can
be beneficial. Dither, a small amount of random (white) noise added to the input before conversion, prevents signals with a height of the order of the least significant bit from being digitized as always 0 or always 1. The effect of dither is to cause the state of the LSB to switch randomly between 0 and 1 rather than sticking to one of these values. Figuratively, this means that the least significant bit appears to be gray rather than white or black. This is beneficial when the binary signal is converted back to an audio output.
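The averaging effect of dither can be seen in a few lines; the sketch assumes a constant input of 0.3 LSB and uniform dither of ±0.5 LSB (illustrative values, not taken from the text):

import random

def quantize(v):
    return round(v)                              # quantization to whole LSB units

random.seed(1)
signal = 0.3                                     # a constant input of 0.3 LSB
without_dither = quantize(signal)                # always 0: the small signal is lost
with_dither = sum(quantize(signal + random.uniform(-0.5, 0.5))
                  for _ in range(10_000)) / 10_000
print(without_dither, with_dither)               # 0 and roughly 0.3: the average recovers the signal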
In low-bandwidth applications (e.g., audio), white noise is usually of the order of
1 μV rms. If the dynamic input range equals that of the most significant bit of a stan-
dard 2 V output signal, noise limits the useful number of bits to 20 or 21. Most com-
mercial converters sample with 6 to 24 bits of resolution. However, as discussed, noise
from the circuit components does not allow the full use of the highest resolutions.
Usually, conversion is done linearly. If a different response (e.g., logarithmic) is
desired, this can either be achieved
– by nonlinear distortion of the analog signal (Section 2.5.1.2) or
– by using a high-resolution linear ADC and modifying its output in such a man-
ner that the desired response is obtained (with much fewer bits), at least approx-
imately.
The usual primary output of ADCs is comparator responses. Normally, the output of
each comparator is assigned a binary weight. In serial digitization, the weight would
depend (in addition) on the timing code derived from the synchronizing clock pulse.
Thus, the states L or H can be transferred into 0- or 1-bit with a weight that allows
generating a binary number. Conversion to any binary code, as needed, is of course
possible.
Problem
6.3. Does quantization introduce uncertainty?
expect from the numbering of the channels. Sometimes this slight difference must be
accounted for.
Problem
6.4. How many comparators are needed to build a pulse height single-channel ana-
lyzer?
6.1.2.2 Nonlinearity
The ideal transfer function for a linear ADC is linear for all values between zero and
maximum value. Integral nonlinearity is the worst-case deviation in the mapping of
all digital output codes from the straight line. Thus, the integral linearity provides a
measure of the closeness of the conversion to an ideal linear behavior. It should be
less than 1 LSB because otherwise the monotonicity of the conversion is jeopardized.
In ADCs based on principles that do not provide excellent linearity, the usual value
for the integral nonlinearity is given as less than ±0.5 LSB.
There are two choices of how to construct the nominal conversion line to which the
actual transfer curve must be compared:
1. use the zero point and the full-scale-point for the definition (in accord with a
straightforward interpolation), or
2. find a straight line by the least-squares method best approximating the transfer
curve.
The largest deviation of the actual conversion (or transfer) curve from the ideal line
divided by the full-scale value of the conversion is called the (fractional) integral non-
linearity of the converter.
The integral nonlinearity is the cause of an overall distortion of the shape of the converted distribution (spectrum).
The linear interpolation between the full scale value and zero is accompanied by the
interpolation uncertainty. Interpolation may assign an analog value wrongly by up to ± one-half of an LSB. This is caused by the finite resolution of the binary
representation of the signal and is unavoidable.
In the case of an ideal conversion, each channel having a width of 1 LSB would
correspond to identically sized intervals of the analog data. Because of the nonlinear-
ity of the conversion, a change by 1 bit in the digital value corresponds to a different
size interval of the analog data, depending on the data value. This irregularity is called
differential nonlinearity.
Whereas the integral nonlinearity is of importance for individual (independent)
data, the differential nonlinearity plays a role mainly for spectra (distributions) be-
cause differential nonlinearity affects the apparent frequency of the nominal channel
numbers. This results in a channel-wise distortion of the spectrum. To keep this dis-
tortion small, the differential nonlinearity must be small. The appropriate specifica-
tion of the differential nonlinearity is, therefore, given in percent of the mean channel
width. For some type of converters, a ±50% differential nonlinearity (expressed flat-
teringly as ±0.5 LSB) is common. Just consider that such a “narrow” channel occurs in
the center of a peak in a pulse-height spectrum. The digital distribution would show
two peaks instead of one! Therefore, the differential nonlinearity of spectral devices
should be less than 1% to avoid crass distortions.
When grouping 𝑛 data channels into one wider channel, the effect of the differential nonlinearity decreases with 1/√𝑛, under the condition that the differential nonlinearity of one channel is independent of that of the next one, which will not be the case for all types of converters.
Example 6.1 (Distortion of a flat time spectrum due to differential nonlinearity). Let us assume that 10⁴ random signals/s are recorded in a detector and that the time distance of each of these signals from a preceding reference signal (with a frequency of 100 kHz) is measured with a device having a time resolution (channel width) of 10 ns and a range of 1000 channels. After 1000 s, the mean frequency in each channel would be 10 000 with a statistical uncertainty of ±100. The frequency spectrum would be flat with a superposition of a statistical ripple of 1% (r.m.s.).
If the width of one specific channel would be smaller by 5% (and for compensation
some other channel wider by 5%), it would result in a dip of 500 counts below the flat
spectrum at this channel (and a peak of 500 counts above the flat spectrum at the other
position).
Differential nonlinearity in the data conversion affects the shape of the converted distribution.
6.6. Does differential nonlinearity badly affect a single measurement, e.g., a voltage
measurement performed with a digital multimeter (DMM)?
Fig. 6.4. Measured distortion in a portion of a flat amplitude distribution because of excessive dif-
ferential nonlinearity.
The following example shall visualize how the sliding-scale method improves dif-
ferential linearity.
Example 6.2 (Correcting a measurement done with a faulty measure). During the first
term at the university, the vital statistics of the freshmen is taken. For measuring the
height, a standard measure is used. Although the marking on standard measures is
typically done within ±0.01 cm, let us assume that the mark for 172 cm is at 171.8 cm.
Thus, when sorting the height of students into class widths of 1 cm, there will be a
differential nonlinearity of ±20% affecting heights between 171 and 173 cm. There
will be a surplus of 20% in the class from 172 to 173 cm, and 20% are too few in the
class from 171 to 172 cm. Under the assumption that the minimum height of students
is 152 cm and the maximum height is 192 cm, this measure is now moved up and
down by as much as 20 cm. The amount of displacement is chosen statistically and
is recorded so that the reading can be corrected for the displacement. This way it is
assured that, on average, the same percentage of readings is faulty in each class, so
that the effect of differential nonlinearity is (statistically) nullified.
The same averaging principle is applied in ADCs with insufficient differential linearity.
The input voltage is increased by a random but known analog voltage. After conver-
sion, the digital equivalent of the added voltage is subtracted, thus, cancelling the
added signal in the digitized result. The advantage is that each conversion takes place
at a random place of the conversion curve.
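A sketch of this add-then-subtract bookkeeping (the converter model is idealized; only the mechanism of the sliding scale is shown):

import random

def adc(v, lsb=1.0):
    """Idealized stand-in for a converter whose real channel widths would differ."""
    return int(v // lsb)

def sliding_scale_conversion(v, max_offset=20):
    offset = random.randrange(max_offset)        # random but known offset added before conversion
    return adc(v + offset) - offset              # its digital equivalent is subtracted afterwards

random.seed(1)
print(sliding_scale_conversion(7.4))             # 7: the result is unchanged, but every conversion
                                                 # uses a different part of the conversion curve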
Problem
6.7. Is it worthwhile to apply the sliding-scale principle when just a few voltage values are measured, e.g., with a DMM?
A serial time-to-digital converter (TDC) uses the period length of a known frequency as
quantization unit (LSB). The number of period lengths within the unknown time inter-
val is counted to get the digital equivalent of that time interval. As time is sequential,
no shorter conversion is possible even when using one of the nonserial approaches.
This is different for direct voltage-to-digital conversion (Section 6.4). Figure 6.5 shows
such straightforward time digitizing.
The timing diagram of Figure 6.5 is self-explanatory. The leading edge of the time interval to be converted generates a start signal, the trailing edge a stop signal. The start signal opens the gate of the counter, whereas the stop signal terminates the gating signal. While the gate is open, pulses from a highly stable clock reach the counter and get counted. The output of the counter is the number of clock pulses, i.e., a binary number of counts representing the digitized time length.
Good linearity requires a clock with a highly constant frequency so that all
bits converted from individual clock periods are equally mapped from their ana-
log counter-part. The best frequency constancy in electronic oscillators is obtained by
quartz oscillators (Section 4.2.1.4). The high 𝑄 value of such a circuit does not allow
synchronizing its frequency. Consequently, neither the start nor the stop signal will coincide with a clock signal, which is the cause of digitizing errors: both at the beginning and at the end there will be a time distance of less than a quantization unit that is not accounted for, making the result too small. On the other hand, the result is too large because the quantization unit is a full period whereas the counted pulses are shorter. This effect makes the result too large by up to one unit, an effect that can easily be corrected.
The smaller the quantization step, i.e., the higher the clock frequency, the smaller the absolute digitization error is. But the error is present regardless of the clock frequency. Ways to overcome the limitation of the resolution by the clock frequency are
based on interpolation (Sections 6.2.1 and 6.2.2).
Fig. 6.5. Timing diagram of direct time digitizing (traces: Clock, Gate, Gated Pulses; interval from 𝑡start to 𝑡stop).
If the time interval is repetitive with a repetition frequency that is not synchronous to the clock frequency, averaging leads to an improved resolution. As, under this condition, the start and the stop signals are random with regard to the clock signals, averaging 𝑛 digitized results improves the resolution by √𝑛. Aside from resolution, dead
time, minimum height, and minimum pulse width of the input signals are important
specifications of a TDC.
The dead time determines an upper limit of the repetition rate of the input signals.
Pulse height and width must provide enough charge to charge the input capacitance
beyond the threshold of the input voltage.
Problem
6.8. Name the limitation that occurs when time differences are directly digitized.
Applying one of the two methods in Sections 4.5.1 and 4.5.2.1 makes it possible to increase the resolution of TDCs by digitizing the missed portions at the beginning and the end of the time interval. In Section 4.5.1, a time interval is converted into a pulse height by a time-to-amplitude converter (TAC) which can then be digitized by an ADC (Section 6.4). As no highly significant bits are involved, the reduced precision in the conversion is of little importance.
In Section 4.5.2.3, expanding of time lengths was dealt with. Expanding the
missed portions by some factor and using the same clock frequency improves the
resolution by just this factor.
Problem
6.9. Name the two methods by which the resolution of TDCs can be increased.
The vernier method as used with a caliper allows a higher resolution (usually by a factor of 10) in a length measurement by comparing the reading on a sliding secondary scale with that on the indicating (data) scale. The sliding scale is constructed with a slightly larger spacing of its divisions than that of the data scale, so that when its zero point coincides with a division of the data scale, just its last division coincides again with a division of the data scale. If, e.g., ten divisions of the sliding scale cover eleven divisions of the data scale, the spacing difference of one tenth of a division increases the resolution by a factor of 10.
The same principle is applied (twice) to improve the resolution by digitizing the
portions at the beginning and the end of the time interval that are shorter than one
period of the clock signal. The role of the indicating scale is played by the frequency
Fig. 6.6. Vernier principle applied to time lengths: a signal of length 𝑇, the clock/reference oscillator with period 𝑇C = 10 ns, and two startable vernier oscillators with period 11 ns; in the example shown, 𝑁S = 3, 𝑁C = 2, and 𝑁E = 2. (Observe that the geometric time lengths are not to scale.)
of a highly stable clock (reference oscillator). The role of the sliding scale is played
by the frequency of an oscillator that has a slightly lower frequency and is started at
the moment of the start signal. Its 𝑄 value must be rather low so that a prompt start is possible; the correspondingly lower frequency stability is acceptable because only the low binary bits result from it, and their conversion quality need not be high. The simple numerical example using Figure 6.6 should be helpful in understanding the principle.
The resolution of a 100 MHz digitizer shall be improved by a factor of 10. Thus, a
90.9 MHz startable oscillator must be added to give a pulse period of 11 ns available
for the vernier action. The start signal starts the vernier oscillator. The highly constant
clock oscillator is permanently oscillating providing a pulse period of 10 ns. In the fig-
ure, the pulses of both oscillators coincide after three periods of the vernier oscillator
making the number 𝑁S of start oscillations 𝑁S = 3. From now on, the pulses 𝑁C of the
clock oscillator are counted. Beginning at the instant of the stop signal, the number 𝑁E of the vernier oscillations is counted until a coincidence with pulses of the clock is detected, ending both the counting of the vernier counter (with 𝑁E = 2) and the
clock counter (with 𝑁C = 2). As the stop counter counts towards the end of a period, its
counts must be subtracted. The numbers counted by the vernier must be multiplied by
1.1 to make them compatible with the 10 ns period length of the clock. Thus, the length
of the time interval 𝑇 with improved resolution is 𝑇 = (𝑁C + 1.1𝑁S − 1.1𝑁E ) × 10 ns
(31 ns in this example). The resolution was effectively increased from 10 to 1 ns.
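The arithmetic of this example can be checked with a few lines; the sketch below simply evaluates the formula with the oscillator periods of the text and the counts read off Figure 6.6.

```python
# Arithmetic of the vernier example: 100 MHz clock (10 ns) and 90.9 MHz
# startable vernier oscillators (11 ns).
T_CLOCK = 10e-9                     # s, period of the reference (clock) oscillator
T_VERNIER = 11e-9                   # s, period of the startable vernier oscillators

def vernier_time(n_c, n_s, n_e):
    """T = (N_C + 1.1*N_S - 1.1*N_E) x 10 ns for the ratio of this example."""
    ratio = T_VERNIER / T_CLOCK     # 1.1
    return (n_c + ratio * n_s - ratio * n_e) * T_CLOCK

print(vernier_time(n_c=2, n_s=3, n_e=2))     # about 3.1e-08 s, i.e., 31 ns
print(T_VERNIER - T_CLOCK)                   # effective resolution: 1 ns
```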
Problem
6.10. Get hold of a caliper to study the vernier principle.
Frequency is measured the same way as a time difference except for the reversal of
the variables. Here, the unknown frequency is measured by counting its periods over a fixed time span that a timer derives from a fixed (known) frequency, e.g., in digital rate meters. The frequency signals are shaped, e.g., by means of a Schmitt trigger (Section 4.1.2.1) and then fed, in coincidence (Section 4.4.2) with the timer signal, into a counter
(scaler). The coincidence unit allows the passage of signals during a selected (digi-
tal) time interval so that the digital value of the frequency is obtained as a ratio of the
contents of the scaler over the value of the selected time interval.
At low count rates, it takes too long to determine the frequency by counting the
periods per time. In this case, the frequency is digitized indirectly by measuring the
time length of one period (Section 6.2). In the case of a constant frequency, the un-
certainty of the digitized answer can be reduced by averaging (measuring the same frequency a number of times, or applying a longer time base), effectively increasing the resolution (at the cost of conversion time, as is usually the case). Although this
digitization method is straightforward, and, therefore, generally used, it does not con-
stitute a direct digitization. A direct frequency-comparing method would have to tune
a reference voltage-controlled oscillator (VCO) (Section 4.5.3) digitally until agreement
in the frequencies is observed. This can only be done serially because of the sequential
nature of a frequency.
Example 6.3 (Any frequency is analog). As frequencies are measured by counters and the output of counters is an integer number, it is not surprising that a frequency is often not recognized as an analog quantity.
Let us take a crass example. Why is the frequency with which the display of the
seconds switches in a digital watch analog? One has to go back to the definition of a
frequency (of a repetition rate). It is events per second. The result of counting is digital.
However, time is an analog quantity making the ratio, the rate, analog. Even if the
rate on the watch is exactly 1/s, the second on this watch and on any watch or clock
is not exactly 1 s long. This can be expressed by some uncertainty of the time length
that also enters in the uncertainty of the frequency making the result, not 1/s (which
would be digital), but, e.g., the analog quantity (with 𝑑 some decimal digit between
1 and 9) (1.0000 ± 0.000 𝑑)/s. A digital (binary) number as a multiple of the LSB has
NO uncertainty. Therefore, frequency cannot be digital.
Note: To measure a frequency by counting, one has to count the number of com-
plete periods (of length 𝑇) per time (i.e., in a time window of the known length).
Uncertainty of the length of the time window The calibration uncertainty is im-
plicitly given as ±1.0 × 10−4 , thus, the scale uncertainty is ±1.0 × 10−4 s. The time
jitter (Section 4.4.1) can be expected to be < 10−8 s and is, therefore, negligible.
Counting uncertainty The beginning of the time window will not coincide with the
beginning of a period, nor will the two ends coincide. Consequently, there will be
digitizing uncertainties. If the beginning of the period is given by the rising slope of
the rectangular signal, then the end will be given by that of the consecutive signal.
Under such conditions, the time window is effectively longer by half a length of
a period 𝑇 (because each incomplete period at the end of the time window has a
chance to be counted) with a digitizing uncertainty of ±0.50𝑇. At the beginning
of the time window, the situation is not symmetric. There, only a fraction of the
incomplete periods given by the duty factor 𝑑 can be counted. Therefore, at the
beginning, the time window is effectively extended by 0.50×𝑑×𝑇 with a digitizing
uncertainty of ±0.50𝑇.
So the scale uncertainty is ±1.0 × 10⁻⁴, i.e., ±5.04 periods, the digitizing uncertainty is ±√2 × 0.50/√3 periods, i.e., ±0.41 periods, and the correction 𝑓corr = (1 + 𝑑) × 0.50 × 𝑇 for the leakage of incomplete signals becomes 1.04 × 10⁻⁵ s. Therefore, 0.52 ± 0.41 periods were counted too many, reducing the counted result to (50 435.5 ± 5.1) Hz. Increasing the length of the time window does not markedly reduce the percentage uncertainty because the scale uncertainty dominates.
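The beginning of this worked example is not reproduced above; the sketch below therefore assumes a time window of 1 s, a raw count of 50 436 periods, and a duty factor 𝑑 = 0.04, which reproduces the quoted numbers.

```python
# Uncertainty bookkeeping of the frequency measurement quoted above.
# Assumed inputs (not stated explicitly above): gate length 1 s, raw count of
# 50436 periods, duty factor d = 0.04.
from math import sqrt

gate = 1.0                  # s, assumed length of the time window
raw_count = 50436           # assumed number of counted periods
d = 0.04                    # assumed duty factor of the rectangular signal
scale_rel = 1.0e-4          # relative calibration (scale) uncertainty of the time base

T = gate / raw_count                      # one period, about 2.0e-5 s
scale_unc = scale_rel * raw_count         # +-5.04 periods
digit_unc = sqrt(2) * 0.50 / sqrt(3)      # +-0.41 periods (both ends of the window)
excess = (1 + d) * 0.50                   # 0.52 periods counted too many
corrected = raw_count - excess
total_unc = sqrt(scale_unc**2 + digit_unc**2)

print(f"leakage correction  {excess * T:.2e} s")          # about 1.0e-5 s
print(f"result  ({corrected:.1f} +- {total_unc:.1f}) Hz")  # (50435.5 +- 5.1) Hz
```

The small difference between the computed correction and the 1.04 × 10⁻⁵ s quoted above presumably stems from rounding the period to 2.0 × 10⁻⁵ s.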
Any digitizing process introduces uncertainty, at least a rounding uncertainty due to the limited
number of digits used.
Problem
6.11. Is frequency an analog or digital quantity?
When an analog quantity has been converted into a (binary-coded) number, it will
often be necessary to store this number for later use. The later use of sampled infor-
mation will often be the reconstruction of the time dependence of the analog informa-
tion (mostly much later, e.g., in audio and video application which require long-term
nonelectronic storage, but this is not the subject of this book).
When concentrating on nonsampled binary-coded information, it is often of inter-
est to know how often a certain number (binary code) was obtained during some time
interval, i.e., the count-rate spectrum (or frequency spectrum, as called in statistics)
of each binary code is to be known. To this end, a two-dimensional binary memory
is needed to correlate the occurrence number with the specific digitized numbers, the
address numbers in the memory. The address number is usually defined by the posi-
tion of the storage cell(s) in space (parallel logic), or in time (serial logic) as defined
by the corresponding code (protocol).
A memory is a physical (electronic) device that stores binary numbers (codes)
(temporarily) for later use. A random-access memory (RAM) is needed to allow access
to any address at high speed (usually provided by semiconductor electronics, e.g.,
bistable storage cells). The term memory is usually associated with the semiconductor
RAM, i.e., integrated semiconductor circuits. The other (slower) not-fully-electronic
storage is done with the so-called storage devices.
A semiconductor memory is organized into memory cells (e.g., flip–flops) with
their states (L or H) used as binary information (0 or 1 when the logic is positive).
Memory cells are combined into words of fixed length that determine the full scale,
the MSB. Each word is assigned a binary coded address of 𝑛 bits (also called chan-
nels), making it possible to store and access randomly 2𝑛 words in the memory. Often
memories are volatile, i.e., electric power is required to sustain the stored information.
Memory technology is still progressing fast, so that describing its current status is not meaningful. Besides, it is not the subject of this book.
To store digitized information in single-(or multiple-) task digital systems, not as
much flexibility as in general-purpose computers is required. Thus, the location of the
digitized information (the channel number) is correlated with the physical location on
the memory hardware. Therefore, no complex memory management is necessary, which speeds up the storage process.
Problem
6.12. What is the difference in the units: signals per second and Hz?
6.3.1.1 Multiscalers
In the multiscaler mode, consecutive counts (signals representing the same kind of
digitized information) are used to increment the contents of a counter assigned to the
appropriate channel one by one. This allows the processing of higher data rates, as
to each signal only the dead time of a counting process applies and not memory dead
time connected with the incrementation process of a memory cell. Thus, a multiscaler
performs the task of many scalers counting digital signals each of which represents
some specific digitized property coded by way of the address of each scaler. In partic-
ular, the term multiscaler is used for a device that determines a count-rate dependence
by assigning the address numbers one by one to consecutive counter dwell times of
constant length.
Problem
6.13. A computer-based multiscaler assigns an address to each scaler. Will the dead
time of such a device be determined by the dead time of each scaler or by the dead
time stemming from the assignment of the scaler in use?
As discussed in Section 6.1, one must distinguish between ADC circuits that perform
a direct conversion of an analog signal into a digital one, and ADC devices that first
make an analog-to-analog conversion and then convert the converted analog signal
to a digital output variable. The latter are called composite ADCs (Section 6.6). For
most directly comparing ADCs the reference voltage is supplied by a DAC, a digital-to-
analog converter.
Figure 6.7 gives the ideal response of a 3-bit DAC. When the binary input is changed
from 0 to 7 in equal time intervals, the voltage output should be in the form of an ideal
staircase ramp with steps of equal width and equal height.
Fig. 6.7. Response of an ideal 3-bit DAC to digital inputs from 0 to 7 that arrive in consecutive constant time intervals. The straight line is the ideal conversion curve.
Fig. 6.8. Effect of the zero-offset error (dotted line) and scaling error (dashed line) on the conversion curve.
Fig. 6.9. Example of an error due to the integral nonlinearity of the actual conversion curve.
Fig. 6.10. Example of an error due to the differential nonlinearity of the actual conversion curve.
Figure 6.8 shows how a zero-offset and a scaling error affect the conversion curve.
Figure 6.9 illustrates the difference between the actual and the ideal conversion curves
due to integral nonlinearity (Section 6.1.2.2). Figure 6.10 illustrates the difference be-
tween the actual and the ideal conversion curve due to differential nonlinearity (Sec-
tion 6.1.2.2).
The example of Figure 6.10 shows that monotonicity is no longer guaranteed if a differential nonlinearity error of 1 LSB is present. The output voltage for the number 4
is only 3 voltage step units, so that it is smaller than the voltage equivalent of 3, which is 3.5 voltage step units (off by −0.5 LSB). This is avoided by manufacturers of ADCs and DACs by specifying a maximum differential nonlinearity of not more than ±0.5 LSB (or ±50%, which is gruesome but more honest). Such a specification is better than a warranty of monotonicity.
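A few lines suffice to illustrate the monotonicity argument; the per-code values below are hypothetical and merely mimic the situation of Figure 6.10 (code 3 converting to 3.5 units, code 4 to only 3 units).

```python
# Sketch of the monotonicity argument with a hypothetical 3-bit DAC.
def is_monotonic(outputs):
    return all(b >= a for a, b in zip(outputs, outputs[1:]))

# Per-code output values in LSB units, mimicking the faulty staircase above.
faulty = [0.0, 1.0, 2.0, 3.5, 3.0, 5.0, 6.0, 7.0]
print(is_monotonic(faulty))          # False: code 4 is lower than code 3

# If every individual step deviates by at most +-0.5 LSB from its nominal 1 LSB,
# each step remains positive (>= 0.5 LSB), so the staircase must stay monotonic.
step_errors = [+0.5, -0.5, +0.4, -0.3, +0.5, -0.5, +0.2]   # bounded by +-0.5 LSB
outputs, v = [0.0], 0.0
for e in step_errors:
    v += 1.0 + e                     # nominal 1 LSB step plus its error
    outputs.append(v)
print(is_monotonic(outputs))         # True
```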
Problem
6.14. A straight use of DACs is in displaying digital data. Name two examples of such
displays.
Fig. 6.11. Symbol of a parallel digital-to-amplitude converter. A binary coded input signal is converted into signal amplitude.
Fig. 6.12. Principle of a three-bit parallel digital-to-amplitude conversion using graded resistors for achieving a binary-weighted current input.
Fig. 6.13. Principle of a 3-bit parallel digital-to-amplitude conversion using an R/2R network for generating input currents with the correct binary weight.
The binary-weighted currents, summed at the virtual ground and converted by the feedback resistor 𝑅F, make in this example 𝑣omax = 7 V, i.e., the output voltage for 1 LSB would be 1 V. Thus, with the help of 𝑅F the span of the output voltage can be adjusted.
For this circuit, it is necessary that all logical high states are stable and equal in
amplitude and that the logical low states are zero. Another difficulty is the availability
(and cost) of these graded resistors. Just think of building a 21-bit DAC according to
this principle. The resistor for the LSB would be 2²⁰ times (i.e., ≈ 10⁶ times) larger than that of the MSB, and in addition, the uncertainty in the resistance value of the latter must be less than 10⁻⁶. Obviously, a different input current adding network is
necessary.
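For illustration, the summation of binary-weighted currents in the circuit of Figure 6.12 can be sketched numerically. The values below are assumptions (a logic-high level of 1 V, 𝑅 = 1 kΩ, 𝑅F = 4 kΩ), chosen so that full scale is 7 V and 1 LSB corresponds to 1 V as in the example above; the inversion by the summing amplifier is ignored.

```python
# Sketch of the binary-weighted current summation of a graded-resistor DAC.
V_HIGH = 1.0        # V, amplitude of a logic H at a bit input (assumption)
R = 1e3             # Ohm, resistor of the MSB branch (assumption)
R_F = 4e3           # Ohm, feedback resistor of the summing amplifier (assumption)

def dac_output(bits):
    """bits = (b2, b1, b0); each branch resistor doubles toward the LSB."""
    current = 0.0
    for k, b in enumerate(bits):            # k = 0 is the MSB branch (resistor R)
        current += b * V_HIGH / (R * 2**k)  # binary-weighted branch current
    return current * R_F                    # magnitude of the output voltage

for code in range(8):
    bits = ((code >> 2) & 1, (code >> 1) & 1, code & 1)
    print(code, dac_output(bits), "V")      # 0, 1, 2, ..., 7 V in 1 V steps
```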
Binary weighting with only two kinds of resistor values is achieved by the so-
called R/2R-ladder network as shown in Figure 6.13.
By applying the superposition principle and Thevenin’s theorem for each bit in-
put, the equivalence of the R/2R network with the network of graded resistors can be
shown with some calculational effort. The differential nonlinearity based on the un-
certainty in the resistor values is much reduced because, with resistors of the same value, it is not the absolute tolerance of the resistors that counts but their relative deviations. As resistors of the same production batch differ very little in their values, the relative deviations will be small. Very similar considerations apply to the production as inte-
grated circuits. The disadvantage of the R/2R-ladder network is the increased parasitic
capacitance which increases the response time.
With high resolution DACs the current connected with the conversion of very small
numbers is so low that charging the unavoidable stray capacitances takes too long, in-
creasing the conversion time. Boosting the currents connected with low bits by a con-
stant current before conversion and subtracting its equivalent binary value from the
converted voltage makes the circuit faster without jeopardizing the total precision of
the conversion.
Problems
6.15. The full scale output voltage of an 8-bit DAC is 5.0 V. What is the step voltage for
1 bit?
6.16. The output range of a DAC is from −10 V to +10 V. How many bits are necessary
to resolve 10 mV?
6.17. The output range of an 8-bit DAC is 10 V. To which voltage will the following two
weighted binary codes be converted?
(a) 00100101₂
(b) 00010101₂
Fig. 6.14. Block diagram of a serial input digital-to-amplitude converter. Serial binary logic signals are converted into an analog signal.
An alternative is a DAC with an output by way of a staircase ramp where the voltage
of the uppermost step corresponds to the number of individual input signals recorded.
The active diode pump circuit (Section 4.5.4) is such a staircase ramp generator, i.e.,
it performs a serial digital-to-voltage conversion. The digital input is a string of clock
pulses. The output amplitude is increased stepwise for each clock signal at the input,
i.e., the instantaneous output voltage depends on the number of pulses that have been
fed into the input.
Obviously, such DACs are slow because they must produce a step for each input
signal. Thus, any combination with a moderately fast ADC is counterproductive. How-
ever, it could be used in a serial ADC (Section 6.4.2.3).
A genuine serial DAC has much better differential linearity (Section 6.1.2.2) than a
parallel DAC because all the steps are produced the same way. The step size depends
mainly on the amplitude (and rise time) of the clock signal and on the stability of the
voltage division by the two capacitors. Using capacitors of equal properties, the latter
contribution should not matter.
Problems
6.18. Which property makes a true serial DAC superior to other types of DACs?
6.19. Why might the conversion by serial DACs be impractical when the bit number
is high, e.g., 12 bit?
Table 6.1. Comparison of the characteristics of the five “pure” types of voltage comparing ADCs
when converting a full-scale (FS) signal. The resolution range is 2𝑛 , the number of time steps is 𝑡,
and the number of comparator decisions is 𝑠.
Rather than increasing the voltage step size of the reference signal, the input signal may be reduced by the equivalent amount in each step.
Problem
6.20. Name the advantage of voltage comparing ADCs.
(Figure: principle of a 3-bit parallel (flash) ADC. A chain of equal resistors 𝑅 divides the reference voltage 𝑉ref into the comparator thresholds; the input 𝑣i is applied to all comparators in parallel, and an 8-line to 3-bit priority encoder converts their responses into the binary output 2², 2¹, 2⁰.)
Problem
6.21. What is the usual method to counterbalance the rather bad differential linearity
of parallel ADCs?
In each step, the input signal is reduced by an analog voltage produced by a DAC (digital–analog converter) from the value of the (more) significant bit(s) determined in the previous step(s). Thus, a 16-bit ADC can be obtained by using 8-bit parallel ADCs in two steps, having only twice the conversion time of a single parallel ADC. The highly advanced pocket multichannel analyzer MCA8000D of Amptek Inc. uses a 100 MHz pipelined flash ADC. It acquires 16 bits every 10 ns, i.e.,
the ADC conversion time is 10 ns. The system dead time is governed by the minimum
rise time of the input signal which is > 500 ns. The peak detection is done digitally,
requiring only a few clock signals to find the peak. The digital peak detect function is
preceded by an analog shaping network. Since the processing is only peak detection,
it can be done at the ADC clock rate. It is not really done in a single clock pulse but
the processing is pipelined with the clock rate. As the differential linearity of paral-
lel ADCs is mediocre at best, the sliding-scale method (Section 6.1.2.3) is applied to
achieve a differential nonlinearity of < 0.6%. However, the sliding scale is not imple-
mented after each conversion but this is done at a reduced rate. Concluding remark:
there is, of course, no obligation to apply the same conversion principle in each serial
step. Subranging (coarse) parallel conversion with a successive-approximation ADC
for the finer conversion has been realized.
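The two-step (subranging) idea can be sketched with ideal converters: a coarse 8-bit conversion, analog subtraction of its DAC value from the input, amplification of the residue by 2⁸, and a fine 8-bit conversion of the residue. The error correction and overlap bits of real pipelined converters are omitted.

```python
# Sketch of a two-step subranging conversion with ideal (error-free) converters.
FULL_SCALE = 1.0

def adc8(v, full_scale):
    """Ideal 8-bit ADC: quantize v (0 <= v < full_scale) to 0..255."""
    code = int(v / full_scale * 256)
    return min(code, 255)

def subranging16(v_in):
    coarse = adc8(v_in, FULL_SCALE)                # step 1: 8 most significant bits
    residue = v_in - coarse * FULL_SCALE / 256     # analog subtraction of the DAC value
    fine = adc8(residue * 256, FULL_SCALE)         # step 2: residue amplified by 256
    return (coarse << 8) | fine

v = 0.123456
code = subranging16(v)
print(code, code / 65536 * FULL_SCALE)             # close to the input value
```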
Problem
6.22. Which method must be applied to obtain a 16-bit ADC with minimum conver-
sion time?
(Figure: circuit of a serial voltage-comparing ADC. The reference frequency 𝑓ref feeds an 8-bit counter whose output drives a DAC; the DAC output is compared with the input voltage 𝑣i, and the counter state is strobed into an output register delivering the binary output.)
When the DAC output reaches the input voltage, the comparator responds: a strobe signal causes the register to take over the output of the counter, thus updating the ADC circuit’s output, and the counter receives a signal on the R (reset) input, causing the counter to reset (to resume the start value zero). Obviously, the conversion time depends on the
amplitude of the input signal. This variation is not acceptable for applications requir-
ing a constant sampling frequency. Also the long (average) conversion time places the
serial ADC at a disadvantage to other ADC types. The serial ADC is in direct competi-
tion with the Wilkinson ADC (Section 6.6.1). Its advantage is that the stability of the
clock frequency does not matter, its disadvantage that the absolute shape of the input
signals must not change. Both the circuit complexity of these two ADCs and their spec-
ifications are comparable. However, only Wilkinson ADCs are available in monolithic
form.
Problem
6.23. On which circuit is a serial ADC fully dependent?
The comparator response gives the coarse bit of the amplitude. In the next step, this bit is scanned with higher resolution to determine a lower bit that corresponds to the amplitude. This is continued until, in the last section, the unit step size corresponding to 1 LSB is reached and the conversion is finished. By using 𝑛 sections for a resolution of 2𝑛 a “ramp” of
single steps of decreasing size at each step is produced. This limiting case has gained
importance under the name successive-approximation ADC.
Concluding remark: there is, of course, no obligation to apply the same conver-
sion principle for each section. Mixing serial coarse conversion with the parallel or
successive-approximation principle for the finer conversion is feasible.
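The limiting case just mentioned, the successive-approximation ADC, can be sketched in a few lines (an ideal internal DAC is assumed): starting with the MSB, each bit is tried and kept only if the resulting DAC output does not exceed the input.

```python
# Minimal sketch of a successive-approximation conversion with an ideal DAC.
def sar_convert(v_in, v_ref=1.0, n_bits=8):
    code = 0
    for k in reversed(range(n_bits)):               # MSB first
        trial = code | (1 << k)                     # tentatively set this bit
        if trial * v_ref / (1 << n_bits) <= v_in:   # comparator decision against the DAC
            code = trial                            # keep the bit
    return code

print(sar_convert(0.5))        # exactly half of full range -> 128 (only the MSB set)
print(sar_convert(0.7))        # 179
```

Note that all 𝑛 comparator decisions are made regardless of the input amplitude, in line with the constant conversion time discussed below.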
Problem
6.24. A 16-bit sectioned serial ADC is to be built in two sections. What section size is
optimal?
Fig. 6.17. Decision tree of a 3-bit successive-approximation ADC (the eight possible results range from LLL to HHH).
(Figure: circuit of a successive-approximation ADC. A successive-approximation processor sets the bits of an 8-bit DAC one after the other; the DAC output is compared with the input voltage 𝑣i, and the final code is strobed into an output register delivering the binary output.)
Note that the conversion time for this ADC lasts 𝑛 steps, i.e., the length of all sampling intervals is independent of the signal amplitude, in contrast to the serial ADC circuit. However, the linearity is poor. Nonlinearity arises from accumulating errors in the digital-to-amplitude conversion. For example, the amplitude difference between the converted value of 2𝑛 and 2𝑛 − 1 should be the amplitude equivalent of 1 LSB. In the present case, the difference between the amplitude of the 𝑛th bit and the sum of all amplitudes of the bits smaller than 𝑛 should be 1 LSB. If the conversion of each bit is done with an uncorrelated (independent) uncertainty of 1%, the differential nonlinearity for the binary value just below the 8th bit would add up to 1.48 LSB. To reduce the differential nonlinearity below 0.5 LSB it would be necessary to convert each bit with an uncertainty of about 0.3%. This example is typical insofar as the largest differential nonlinearities occur for values just below each completed 2𝑛 code. If the system is stable, these localized differential nonlinearities can be empirically corrected for.
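The 1.48 LSB figure can be checked by adding the independent per-bit uncertainties quadratically for the step just below the 8th bit (code 10000000 against code 01111111).

```python
# Quadratic accumulation of independent per-bit uncertainties at the step just
# below the 8th bit of an 8-bit converter.
from math import sqrt

def worst_step_uncertainty(rel_unc, n_bits=8):
    # bit amplitudes 1, 2, ..., 128 LSB, each uncertain by the relative amount rel_unc
    contributions = [rel_unc * 2**k for k in range(n_bits)]
    return sqrt(sum(c * c for c in contributions))        # in LSB

print(round(worst_step_uncertainty(0.01), 2))    # 1.48 LSB for 1 % per bit
print(round(worst_step_uncertainty(0.003), 2))   # 0.44 LSB for 0.3 % per bit
```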
Problem
6.25. A signal to the input of a 10 bit successive approximation ADC is exactly one-
half of the full range. How many comparator responses are required to digitize said
signal?
Digitizing just the change that has occurred since the last sample is called differential sampling. Digitizing only the difference saves time, so that this differential digitizing has its merit in an increased sampling rate. The term delta in the name of an ADC indicates that only the change between successive samples is processed.
For sampling applications, the serial ADC can be made more efficient for tracking continuous input amplitudes. This variation is called a tracking ADC (Figure 6.19). As long as the comparator signals that the input voltage is higher than the reference voltage provided by the output of the DAC, the up/down counter of the clock pulses is counting up. As soon as the input voltage gets below the reference voltage, the counter switches to the count-down mode, reducing the reference voltage (the output of the DAC). When the DAC voltage is again lower than the input voltage, the counter switches back to the count-up mode, increasing the DAC voltage. Thus, each sample is not accompanied by a DAC output staircase ramp starting at zero but just by a differential ramp going either up or down.
Fig. 6.19. (Circuit of a tracking ADC: the comparator steers an 8-bit up/down counter whose output drives a DAC; the DAC output is compared with the input voltage 𝑣i, and the counter state is the binary output.)
If a device does function as an ADC but is not based on a single (electronic) ADC circuit
but on the combination of at least one analog and one digital (binary) circuit, one has
a composite ADC. Again, ADCs that use a combination of electronics and other tech-
nologies (e.g., optical or mechanical encoders) are disregarded. Only purely electronic
devices are considered.
Fig. 6.20. (Basic schematic of a Wilkinson ADC: a comparator compares the input voltage 𝑣i with the ramp across the integrating capacitor 𝐶, which is charged through 𝑅 from −𝑉ref and discharged by a switch; clock pulses are gated into an 8-bit counter whose state is strobed into an output register delivering the binary output.)
Even with circuits working with frequencies in excess of 300 MHz, this frequency has remained the maximum digitizing frequency of this type of ADC over decades, despite improvements in integrated circuit technology. As each channel width is correlated to
one full oscillation of a highly stable harmonic oscillator (e.g., a crystal oscillator) it
is not surprising that the differential linearity is inherently much better than that of
voltage-comparing ADCs in particular those that depend on the rather poor differen-
tial linearity of a parallel DAC.
Figure 6.20 shows a basic schematic diagram of a Wilkinson ADC. A positive in-
put voltage is compared by means of a comparator to the ramp voltage across the in-
tegrating capacitor 𝐶 of a transimpedance amplifier. The constant charging current
is supplied through a resistor 𝑅 by a negative reference voltage 𝑉ref . When the out-
put voltage of the integration amplifier equals the input voltage, the comparator goes
into the high state. This high state disrupts the flow of the precision clock signals to
the counter and discharges the capacitor via the FET switch. The state of the counter
outputs is taken over by a shift register to be available as a converted digital output.
When the capacitor is fully discharged the integrator output voltage reaches zero, the
comparator output switches back to the L state, clearing the counter and enabling a
new conversion process. The similarity with the voltage comparing serial ADC circuit
is quite artificial. Here an analog voltage ramp is used, there a binary one. Here a highly constant frequency signal is needed, there not at all. Here no DAC is involved; there the
DAC is essential for the operation. As the conversion is indirect (conversion of ampli-
tude into time and digitizing of the time) a calibration process is needed to determine
the analog voltage step size per LSB. This calibration is subject to aging affecting both
the amplitude-to-time conversion and the time digitizing. Consequently, from time to
time the calibration process must be repeated.
Problem
6.27. Which drawback does the Wilkinson ADC have when compared to a serial
voltage-comparing ADC?
𝑣i/𝑉ref = −(𝑁d/𝑁u) × (𝑅Su/𝑅Sd)    (6.1)
which is the ratio of counted periods as a measure of the run-down time 𝑁d to those of
the (constant) run-up time 𝑁u times the ratio of the two source (=input) resistances.
As both counted numbers depend on the same clock, the first factor is hardly depen-
dent on the stability of the clock and using resistors with matched properties makes
the second factor stable, too. This way the above mentioned three dependences are di-
minished; it is a very stable arrangement. The circuit of such an enhanced dual slope
converter is sketched in Figure 6.21a. Figure 6.21b sketches the time dependence of the
converted voltage.
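A numerical sketch of equation (6.1) with hypothetical values shows the ratiometric character: the clock period cancels from the ratio 𝑁d/𝑁u, and with matched resistors only the count ratio remains. The minus sign of eq. (6.1) reflects that the input is applied as −𝑣i; magnitudes are used here.

```python
# Numerical sketch of eq. (6.1) for the enhanced dual-slope conversion
# (hypothetical values; the clock period cancels from the ratio N_d/N_u).
V_REF = 10.0          # V, reference used during run-down
N_UP = 10_000         # fixed number of clock periods of the run-up
R_SU = 100e3          # Ohm, input (run-up) resistor
R_SD = 100e3          # Ohm, run-down resistor (matched to R_SU)

def convert(v_in):
    """Run-down count for an input voltage v_in (charge balance)."""
    return round(N_UP * (v_in / V_REF) * (R_SD / R_SU))

def readout(n_down):
    """Recover the input voltage from the counted ratio, eq. (6.1)."""
    return V_REF * (n_down / N_UP) * (R_SU / R_SD)

n = convert(3.217)
print(n, readout(n))        # 3217 counts -> 3.217 V
```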
Fig. 6.21a. Principle of the amplitude-to-time conversion as used in an enhanced dual-slope ADC.
Fig. 6.21b. Time dependence of the converted voltage as used in a dual-slope ADC.
The wideband noise in the circuit and the maximum output voltage of the integrator (which is on the order of 10 V) limit the resolution of the dual-slope ADC. The noise defines how precisely the zero crossing can be determined (to about 1 mV). Thus, the practical resolution of dual-slope ADCs is limited to about 14 bits.
Problem
6.28. Name instruments that take advantage of the high resolution and intrinsic sta-
bility of dual-slope ADCs.
Instead of using a single run-down resistor (i.e., a single slope) to completely discharge
the capacitor, the multislope converter uses several resistors (i.e., multiple slopes) in
sequential order to discharge the capacitor, each time more precisely. The ratio of the
values of the discharging resistors increases by the same factor (e.g., 10) in each step.
Figure 6.22a shows a multislope conversion circuit based on powers of 10. Four
resistors are used in this circuit, with weights of 103 , 102 , 101 , and 100 .
The time dependence of the converted signal as shown in Figure 6.22b helps with
the understanding of this circuit. The first (the steepest) run-down slope (using +𝑉ref
and 𝑅b /1000) is terminated at the end of the clock period at which zero-crossing was
registered. As the time of the zero-crossing will be earlier than the end of the period
Fig. 6.22a. Principle of a four-slope amplitude-to-time converter.
Fig. 6.22b. Time dependence of the converted voltage as found in a quadruple-slope ADC.
of the last counted signal, the capacitor will be oppositely charged at the end of the
counting period, i.e., the ramp goes negative. The amplitude depends on the time dif-
ference between the zero-crossing and the end of the counting period. Now a ramp
in the opposite direction is generated with a reduced slope (using −𝑉ref and 𝑅b /100)
due to the increased resistor value (by a factor of 10). Again, the zero-crossing will not
coincide with the end of the counting period; the next run-down slope with a further
reduced slope is started. After the next run-up slope, again with the reduced slope,
the conversion is finished. Each following slope determines the zero crossing of the
output voltage of the integrator ten times more precisely than the previous slope. The
attainable resolution is inversely proportional to the steepness of the slope. To get
the same resolution with just one run-down slope the run-down time would have to
be about 1000 times longer than that of the steep slope. Thus, the conversion time is
much reduced. Obviously, the number of counted periods must be weighted with the
weight assigned to each of the slopes. As at most one complete period contributes to the voltage overshoot at the capacitor, and the resolution is increased by a factor of ten, the duration of the last three slopes must be less than 10 periods each.
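One plausible bookkeeping of these weighted counts (an illustrative sketch, not taken from the text): counts of consecutive slopes enter with alternating signs because the slopes run in opposite directions, each weighted by its factor of ten.

```python
# Sketch of the multislope run-down bookkeeping (weights 1000, 100, 10, 1).
import math

def multislope_rundown(charge, weights=(1000, 100, 10, 1)):
    """charge is the run-up result expressed in units of the finest slope's LSB."""
    remaining = charge
    counts = []
    sign = +1
    for w in weights:
        # whole periods until the (signed) remainder has crossed zero
        n = math.ceil(sign * remaining / w) if sign * remaining > 0 else 0
        counts.append(n)
        remaining -= sign * n * w        # overshoot of at most one period of weight w
        sign = -sign                     # next slope runs in the opposite direction
    result = sum(s * n * w for s, n, w in zip((+1, -1, +1, -1), counts, weights))
    return counts, result

counts, result = multislope_rundown(123457)
print(counts, result)      # [124, 6, 6, 3] -> 123457
```

In this example the last three slopes take 6, 6, and 3 periods, i.e., fewer than 10 each, as required above.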
The present example of a multiple-slope ADC used a multiplicative factor of 10 per slope to conform to the decimal system. An optimization with regard to the shortest conversion time reveals that, regardless of resolution, the optimum multiplicative factor would be Euler's number e.
Problem
6.29. Name the main reason for using multiple slope ADCs.
(Figure: principle of a delta–sigma modulator. The input −𝑣i feeds an integrator; a comparator and a clocked D flip–flop decide between the H and L states, and the flip–flop output switches a feedback current that is returned to the integrator input.)
Depending on the state of the flip–flop, a feedback current is added to the input current, reducing the absolute output voltage of the integrator (negative feedback). Actually, the pulsed feedback signal adds charge to or subtracts charge from the capacitor, each pulse carrying a definite amount of charge (charge balancing method, Section 4.5.3). Thus, the number of pulses needed to remove the charge from the capacitor (making the integrator’s output 0 V) is a measure of the size of the input voltage.
The output of the flip–flop is a serial stream of L and H states, synchronous with
the frequency of the clock. For a zero input voltage the flip–flop output will oscillate
between H and L, as the feedback system tries to keep the integrator output at 0 V.
This circuit is very fast so that a high clock frequency may be applied allowing over-
sampling, i.e., using a sample rate much higher than the minimum required by the
Nyquist–Shannon sampling theorem. Thus, there is a time margin for digitizing the
frequency of the H outputs, e.g., by counting the number of H signals during some
given number of clock pulses (in a given time interval). The binary output of this
counter is then the digital output of this ADC.
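A first-order behavioral sketch of this principle (with the input normalized to values between 0 and 1, which is an assumption of the sketch): the integrator accumulates the difference between the input and the fed-back reference, the clocked comparator delivers H or L, and the fraction of H states over many clock periods approximates the input.

```python
# First-order behavioral sketch of a delta-sigma modulator with oversampling.
def delta_sigma(v_in, n_clocks=10_000):
    integrator = 0.0
    high_count = 0
    for _ in range(n_clocks):
        feedback = 1.0 if integrator > 0 else 0.0   # clocked flip-flop output (H = 1)
        high_count += int(feedback)
        integrator += v_in - feedback               # charge-balancing feedback
    return high_count / n_clocks                    # fraction of H states

print(delta_sigma(0.37))        # close to 0.37
```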
Problem
6.30. Is the delta–sigma ADC used for analyzing single pulses or for sampling a con-
tinuous input voltage?
There are two quite different applications of ADCs asking for quite different solutions
– sampling (of continuous signals), and
– digitizing of isolated signals (pulses).
Consequently, the requirements for an adequate conversion are widely different. The
most important properties to be considered are
– resolution (precision),
– speed (sampling rate),
– accuracy (linearity),
– step recovery, and
– complexity/cost.
An ideal ADC has as many bits as needed, has no dead time (samples at ultra-fast
speeds), has no nonlinearity, recovers from steps instantly, and uses only few elec-
tronic components. Such a device would satisfy all needs.
Each type of ADC has its strength and weakness. Table 6.2 gives some idea about
the ranking of various ADC types with regard to some of their properties. The upper
limit of the resolution of 20 bit (or more) is not so much given by the ADC itself as by
the practical limit enforced by the signal-to-noise ratio.
Table 6.2. Ranking of the various ADC types with regard to some of their properties.
Parallel                    1  6  1
Tracking                    2  4  5
Successive approximation    3  5  2
Serial                      4  3  3
Wilkinson                   4  2  3
Dual slope                  5  1  4
Most of the properties may be improved through better circuitry, either by an increased number of components and/or by using especially fast or precise circuits. One reason for the good ranking of the composite ADCs lies in the fact that the competing voltage-comparing ADCs depend on the quality of the DACs which they must use. However, precision DACs are not easily manufactured.
Problems
6.31. Which ADC type has the shortest dead time?
6.33. Voltage comparing ADCs use comparators which need reference (threshold)
voltages.
(a) How do parallel ADCs obtain these voltages?
(b) How are they obtained by the other ADCs?
6.34. To make the conversion time short, integrating ADCs use a high digitizing fre-
quency.
(a) For a long time now there has been no progress with regard to the highest external
frequency used in these ADCs. What is the approximate frequency limit?
(b) Name a reason why it is difficult to go beyond this limit.
Passive measurement devices (Section 2.1.4) do not need auxiliary power for proper
operation (they utilize part of the signal power). Active devices get their power either
from batteries or from mains. The main advantage of active meters is the increased
signal range by the use of a signal amplifier. Besides, the instrument is designed for
minimum loading of the circuit under investigation. Thus, only in rare cases is a correction of the measured values (Section 2.1.4) needed. In special applications, the input
impedance of the instrument can be made independent of the input range in use. As
the result of any measurement is a number, it comes naturally that an ADC is included
in active measuring devices. Thus, the result is available in a numerical (digital) form
to be used in a display or for further electronic manipulations.
The name multimeter indicates that a number of different tasks can be performed with such an instrument. Usually the functions of a voltmeter, an ammeter, and an ohmmeter are combined in one instrument. As such, a multimeter may be passive or active.
In active instruments, additional functions (measurement of capacitance, inductance, temperature) can easily be included. A measurement of the capacitance is easily performed by measuring the time it takes to charge the capacitance linearly (with a constant current 𝑖S) to a certain voltage level 𝑉𝐶 (Section 4.5.2): 𝐶 = (𝑖S/𝑉𝐶) × 𝑡 = const. × 𝑡.
Thus, most active multimeters will need the following components:
– power supply;
– measurement amplifier;
– ADC;
– stable voltage reference for the ADC;
– set of calibrated resistors 𝑅 to convert input currents into voltages:
𝑣x = 𝑖x × 𝑅;
– set of calibrated current sources 𝑖S to convert resistances into voltages:
𝑣x = 𝑅x × 𝑖S;
– for the capacitance measurement, a stable oscillator, a comparator, and a counter; and
– a display or a computer interface for the data output.
Traditionally, the main purpose of an oscilloscope is the display of the time depen-
dence of the voltage applied to its input. In analog oscilloscopes this is achieved by
using a cathode ray tube which displays the time dependence of the voltage.
Digital oscilloscopes are based on sampling (Sections 4.4.3.1 and 6.1) of the input
signal. The sampling rate depends on the desired time resolution. The amplitude of
the signal is digitized by means of an ADC and the binary result stored in a memory
for later use (display, all kinds of operations). The usual size of the display is of the
order of 10 cm × 10 cm. Thus, a visual resolution of somewhat less than 1% suffices.
An ADC of at least 8 bit provides such a resolution. Parallel ADCs allowing sampling
rates in excess of 100 MHz are the choice for high-speed digital oscilloscopes. As all
sampling data are stored, numerical averaging and other computational techniques
can be used for improving the effective resolution.
The time resolution of repetitive signals is not tied to the speed of the ADC. Using
the moment of the trigger response as fiducial time mark, the moment of sampling
is delayed by one timing unit for each consecutive signal. For each sampling event
a (consecutive) signal is used. The (binary) sampling results are stored in a memory
which is read after the scan has been completed.
The bandwidth of a digital oscilloscope is limited by the bandwidth of the input
amplifier and the Nyquist frequency (Section 6.1) of the sampling process. Observing the latter is very important to avoid aliasing. This aliasing risk is essentially the only disadvantage of digital vs. analog oscilloscopes. The min-
imum rise time (of any electronic instrument) is determined by this bandwidth (Sec-
tion 3.3.1). If the signal rise time is comparable to the rise time of the instrument, a
correction for the instrument rise time is feasible in view of the quadratic addition of
rise times (Section 3.3.1).
The vertical sensitivity of an oscilloscope is changed by using attenuators or by
changing the gain of the input amplifier. At very high sensitivities, there might be a
reduced bandwidth due to the constant gain-bandwidth product (Section 3.6). The
horizontal resolution depends primarily on the sampling rate (given in samples per
second, S/s). It may be prescaled for a coarser resolution. When looking at slowly
changing signals over long periods of time a low sampling rate is mandatory.
An essential part of each digital oscilloscope is the data memory. The number of
memory cells determines the length of the record that may be stored, i.e., the number
of waveform points that may be taken for one waveform record. Obviously, there is a
trade-off between record length and record detail. There is either a detailed picture of
a signal over a short time interval (high sampling rate, the memory is filled quickly)
or a less detailed picture over a longer time interval.
A spectrum analyzer performs Fourier analysis (see Section 3.1.1) of the time-depen-
dent (voltage) signal and displays the frequency spectrum of the said signal. For the
mathematical operations which are involved it is necessary that the time dependence
is available in the numerical form. This feature is basic in digital oscilloscopes (Sec-
tion 6.8.2). Therefore, advanced digital oscilloscopes offer as an option spectrum
analysis.
After repeatedly sampling the signal waveform and filling a FIFO (first-in, first-out) memory (at least 15 bit deep, e.g., Section 5.4.1) with the recorded time samples, this digital information is used to convert the time spectrum into a (discrete) frequency spectrum. Obviously, the sampling frequency must exceed the Nyquist frequency (the capture bandwidth), so that slow ADCs are ruled out. To cover a
high dynamic range, the resolution must be adequate. A good linearity is needed to
achieve sufficient fidelity in the conversion process.
In digital signal processing, the discrete Fourier transform (DFT) converts any sig-
nal that varies over time and is sampled over a finite time interval (the window function)
from the time domain to the frequency domain. The samples taken from the input are
real numbers converted to complex numbers, and the coefficients obtained as output
are complex, too (see Section 3.1.1.2). The frequencies of the output sinusoids are in-
teger multiples of a fundamental frequency with a period corresponding to the length
of the sampling interval. As the input and output sequences are both finite the Fourier
analysis is that of finite-domain discrete-time functions. This discrete Fourier trans-
form is usually applied to perform Fourier analysis in many instruments. Since it deals
with a finite number of data, it can easily be implemented by numerical algorithms
in digital processors or by (hardware) programming of appropriate circuits. Usually,
these implementations are based on efficient fast Fourier transform algorithms which
are available for data samples with block sizes of 2𝑛 .
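A minimal sketch of this chain (sampling above the Nyquist limit, an FFT with a block size of 2𝑛, reading off the line frequency); numpy's FFT routine is used here as an assumption, any other implementation would do.

```python
# Minimal sketch of the sampling + FFT chain of a digital spectrum analyzer.
import numpy as np

f_signal = 1_000.0            # Hz, a test tone
f_sample = 8_192.0            # Hz, well above the Nyquist limit of 2 kHz
n = 4096                      # block size 2**12

t = np.arange(n) / f_sample
samples = np.sin(2 * np.pi * f_signal * t)           # the "digitized" record
spectrum = np.abs(np.fft.rfft(samples)) / (n / 2)    # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, d=1 / f_sample)

peak = freqs[np.argmax(spectrum)]
print(peak, "Hz")             # 1000.0 Hz; frequency resolution is f_sample / n = 2 Hz
```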
Solutions
Problems of Chapter 1
Problems of Chapter 2
2.1 The flow of current through an ideal current source in series with any other ele-
ment is fixed by the ideal current source. Since the voltage between the terminals of
an ideal current source is not defined (it is floating) the voltage across a series connec-
tion of an ideal current source with any other element can take any value. Any series
configuration with an ideal current source has the identical property as the current
source by itself.
2.2 The voltage across an ideal voltage source in parallel with any other element is
fixed by the ideal voltage source. The current flow through the ideal voltage source
can have any value. Any parallel configuration that contains one ideal voltage source
has the same electrical properties as the ideal voltage source by itself.
2.3 There is no difference in electronic behavior between a real linear voltage source
and a real linear current source.
2.4 (Solution given as a circuit diagram: current sources of 10 mA and 3 mA, a 16 V voltage source, resistors of 1 kΩ and 24 kΩ, and the output voltage 𝑣o.)
2.8
(a) Original and simplified circuit:
(Circuit diagrams: the original circuit with the 8 V source and five 1 kΩ resistors carrying 𝑣1 to 𝑣5, and the simplified circuit with the 8 V source, a 4 kΩ resistor representing 𝑣1-4, and the 1 kΩ resistor across which 𝑣5 appears.)
(b) 𝑣5 = 1.6 V
(c) Equivalent circuit according to the theorem of Thevenin:
(Circuit diagrams: the 8 V source with 4 kΩ and 1 kΩ is equivalent to a 1.6 V source in series with 0.8 kΩ driving 𝑣o; dually, it is equivalent to a 2 mA current source in parallel with 0.8 kΩ driving 𝑣o.)
2.10 𝐺1/(𝐺1 + 𝐺2) = 𝑅2/(𝑅1 + 𝑅2) with 𝑅1 = 1/𝐺1 and 𝑅2 = 1/𝐺2
2.11
(a) 𝑖sc = 𝑖S
(b) The equation for the open-circuit voltage 𝑣oc is dual to that of the short-circuit
current: 𝑣oc = 𝑣S
2.15 No, both theorems are important. In practice real voltage sources appear more
frequently than real current sources and thus one might be tempted to do all analysis
by the “voltage view.” The “current view,” however, is appropriate just as often as the
“voltage view.”
2.16 𝑖1 = 2.1 mA, 𝑖2 = 0.9 mA, 𝑖L = 3 mA
(Circuit: two real voltage sources 𝑣S1 with 𝑅S1 and 𝑣S2 with 𝑅S2 feeding the common load 𝑅L; the currents 𝑖1 and 𝑖2 add up to 𝑖L.)
2.22 No – the correction of < 1 pV is much smaller than the resolution of the instru-
ment of 10 nV.
2.23 Yes – the conductance of the instrument of 1.82 μS is even larger than that of the
circuit, therefore, the correction will be large and the uncertainty of the result as well.
2.32 No, the transformer as a two-port is symmetric with regard to input and output.
2.33 (a) 𝑔𝑖 = −1 (b) 𝑍o = 100 kΩ
2.54
(a) With 𝐵 = 2 − 1/𝐴.
(b) 𝐵 must be active to get 𝐵 > 1.
2.58 (a) 𝑔𝑖F = 𝑔𝑖A (b) 𝑔𝑣F = (𝑅L/𝑅F) × 𝑔𝑖A/(𝑔𝑖A + 1)
2.59
(a) No.
(b) No.
2.71
(a) 𝑣o1 : parallel–series, 𝑣o2 : parallel–parallel
(b) 𝑣o1 : current gain, 𝑣o2 : transimpedance
(c) input impedance: low in both cases; output impedance: 𝑍o2 low, 𝑍o1 medium (≈ 𝑅L)
2.72 Negative.
2.73 (a) parallel–parallel (b) not at all
2.74 A negative input voltage will drive the differential amplifier into the cut-off region,
i.e. the positive input becomes (much) more negative than the negative input.
2.75 The closed-loop gain must be less than one, under all circumstances.
2.78 Because the effective configuration depends on the position of the input.
2.79 A change in biasing acts as a “DC-noise,” i.e. the same feedback configuration
as in the case of noise is effective.
2.80 Today’s semiconductor technologies are optimized for voltage electronics.
2.81 The output is floating, i.e. it is not grounded.
2.82 The closed-loop gain is a product of 𝐴 and 𝐵; it does not matter where 𝐴 is located
2.83 𝑔𝑖 is closer to one and the output impedance is higher.
Problems of Chapter 3
3.1 𝑎(𝑡) = −1 + 2𝑡p/𝑇 with 𝑡p > 0 and 𝑡p = 𝑡 − 𝑛𝑇
3.22 No.
3.23 It has zero conductance.
3.24 When the power supply is switched on the transient charges the capacitor.
3.25 At a given supply voltage the impedance can be higher; less power is consumed.
3.26 This transition is made of very high frequency components for which the impe-
dance of a capacitor is close to zero.
3.27 Current.
3.28 Because diodes behave like current check valves.
3.46
(a) one-half
(b) 𝜏 = (𝑅 × 𝐶1 × 𝐶2)/(𝐶1 + 𝐶2)
(c) 𝑔𝑣(𝜔) = 𝜏/(𝑅 × 𝐶2 × √(1 + 𝜔²𝜏²)) and 𝜑(𝜔) = arctan(−𝜔𝜏)
3.62 Yes.
3.63 1 − 𝜋/4, i.e., 12.3°
3.75 1000 V A
3.76 (a) 50 mW (b) 100 mW (c) 50 mW (d) yes – no
3.77 (a) 𝑅 = 1/(5𝜋) Ω (b) 𝐺 = 20𝜋 μS
3.78 It depends on the feedback configuration.
3.79 𝐺𝑣(j𝜔) = (1 + j𝜔𝑅2𝐶) / (1 + j𝜔𝑅2𝐶 × (1 + 𝑅1/𝑅2))
3.80 𝐺𝑣(j𝜔) = (1 + j𝜔𝑅2𝐶) / (1 + j𝜔𝑅2𝐶 × (1 + 𝑅2/𝑅1))
3.81
(a) Apply series–parallel positive feedback with a closed-loop gain of 0.9.
(b) 𝑡rs = 1 μs
(c) No.
3.82 There are three identical stages with 60° phase shift each; 20%
3.83 𝐵 < (1/25)3
3.89
– dynamic change of 𝑍F
– smaller 𝐵𝑊 results in less thermal noise
– tailored 𝐵𝑊 gives better feedback
Problems of Chapter 4
4.1 Analog.
4.2 Amplitude, phase, frequency, and signal length.
4.3 No.
4.4 The transfer function of a Schmitt trigger has a hysteresis.
4.5 Parallel–parallel.
4.39 Because it takes a frequency component of infinite frequency to form the step.
4.40 A discriminator deals with the analog information pulse-height, a trigger with
the analog information time.
4.41 It is noise in the position of a timing mark.
4.42 Leading edge triggers.
4.43 Bipolar.
4.44 Very probably it will retrigger, i.e., it will deliver multiple output signals for one
input signal.
4.45 Time as analog information is preserved.
4.46 None.
4.47 Bothe circuit: the active elements are in series, the output current is identical
for all elements; Rossi circuit: the active elements are in parallel, the output voltage is
identical for all elements
4.48 Yes, predominantly.
4.49 Yes.
4.50 A linear gate retains the information on the amplitude of the input signal.
4.51 Yes.
4.64 Because it is used for both the signal generation and the measurement.
4.65 (a) Negative feedback increasing the capacitor dynamically. (b) Positive feedback
in a boot-strap circuit.
Problems of Chapter 5
5.1 No.
5.2 Resistor, switch, gyrator.
5.3 No, they are equivalent.
5.4 10₁₀ = 11₉ = 12₈ = 13₇ = 14₆ = 20₅ = 22₄ = 101₃ = 1010₂ = 1111111111₁
5.5 Parallel logic, used serially.
5.7 Power supply voltages, amplitudes of the H-state and the L-state, power consump-
tion, speed – propagation delay.
5.8 Schottky type.
5.9 They use common base and common collector transistor stages which have higher
bandwidth due to negative feedback.
5.10 Negligible quiescent power consumption – increased propagation delay.
5.11 None.
5.14 Invert inputs and output by NOR gates used as NOT circuits.
5.15
A B Q
L L H
H H L
5.26
(a) If the shift occurs toward higher order bits it is multiplied.
(b) The highest order bit must be 0 before the shift, and 0 must be entered to the LSB
during the shifting process.
5.27 Serial data transmission takes less hardware but takes longer.
5.28 5
5.29 It takes more than twice the number of flip–flops.
5.30 Only one bit is active at any time, the current load of the power supply stays the
same for each number.
5.31 0110
5.32 Only half as many flip–flops are needed.
5.33 It is not possible to count statistically arriving pulses without loss.
5.34 The higher a bit is, the later it will be switched.
5.35 Yes.
Problems of Chapter 6
6.6 No.
6.7 No.
6.8 The time resolution depends on the frequency of the reference oscillator.
6.9 Expanding the unconverted time portion by means of time interval expansion, or
applying time-to-amplitude conversion of these portions.
6.11 Analog.
6.12 Hz is the unit of a regular (sinusoidal) frequency.
6.13 Depending on the count rate the importance of either contribution to the dead
time will vary.
6.14 Monitor, display of a digital oscilloscope.
6.15 20 mV.
6.16 11 bit
6.17 (a) 1.48 V (b) 0.84 V
6.18 Differential linearity.
6.19 Relatively long conversion time.
6.20 Their ratiometric behavior does not require a calibration of the pulse height re-
sponse by the user.
6.21 Sliding scale method.
6.22 Serial parallel ADCs, i.e., pipelined flash converters.
6.23 A serial DAC.
6.24 8 bit
6.25 10