
Lecture Notes

Gen EE I
- Fall 2023 -

Lecture 8
1 Higher-Order Circuits: State-Space Formulation
and Steady State Regime
In the last lecture we learned that second-order circuits result from
the presence of two active elements (inductors or capacitors) in such an
arrangement that prevents their direct combination into a single equivalent.
We have also seen that the solution of a second-order differential equation
can be obtained by transforming it into a system of first-order ODEs.
The circuits we analysed using these tools, however, were rather
simple. In fact, due to the duality that can be established between the
series RLC and parallel RLC circuits, we found that both pose exactly
the same mathematical problem. Specifically, in the series RLC a single
variable of interest (the current through the inductor) suffices to characterise
the circuit, while in the parallel RLC, the voltage across the capacitor suffices.
Obviously many active circuits not only employ more than two inductors/capacitors
but also exhibit topologies that cannot be described
fundamentally by a single quantity. In this lecture we will build upon the
methods learned thus far to demonstrate a powerful and complete method
to analyse the transient behaviour of active circuits of arbitrary order and
configuration.
In the process, we shall not only unify the Laplace Method and the
method of linear systems of ODEs, but also draw a parallel between this
advanced tool of circuit analysis and the linear-algebraic method introduced
in Lecture 2 to solve general resistive circuits.

Building the State Space Model


Consider the circuit depicted in Figure 1, which contains four active elements,
specifically, the inductors L1 and L2, as well as the capacitors C1
and C2. Since these four active elements are arranged in an independent
manner – that is, such that they cannot be directly combined –
there are electric quantities in the circuit which can only be fully described
by a fourth-order differential equation, so that the circuit is said to be
of fourth order.

Figure 1: A fourth-order circuit with four independent active elements (inductors L1 and L2, capacitors C1 and C2), a resistor R connecting nodes n1 and n2, and sources vS1(t), vS2(t), iS1(t) and iS2(t).


While it is possible to obtain such a fourth-order differential equation, we
have seen that the solution of higher-order ODEs is obtained via their reduction
to a system of ODEs. With that in mind, we shall instead obtain
the linear system of ODEs directly.
As seen in Lecture 7, the linear system of ODEs can be conveniently written
in the form of a single vector equation, which describes the rate of
variation of the electric quantities in the circuit at a given time t – that is,
ẋ(t) ≜ dx(t)/dt – as a function of the values of those quantities at that time
– that is, x(t).
In other words, the vector x(t) can be seen as describing the state of the
circuit at time t, and the linear system of ODEs as describing the variation
of that state. Such an equation will therefore hereafter be referred to as
the State Equation of the circuit.

The State Equation of an n-th order circuit is an n-dimensional
vector differential equation that describes the variation
of the state variables of the circuit.

• The State Vector


The first step to derive the state equation is to identify which variables
(voltages/currents) in the circuit have a fundamentally differential relationship
with their duals (currents/voltages). These are, of course, the
currents of capacitors and the voltages of inductors, since

iC(t) = C dvC(t)/dt  and  vL(t) = L diL(t)/dt.  (1)

The capacitor voltages and inductor currents are thus the quantities whose
state varies over time, and they are therefore referred to as state variables,
such that the vector composed of them is referred to as the state vector.
For the circuit of Figure 1, we have therefore

x(t) = [ vC1(t), vC2(t), iL1(t), iL2(t) ]ᵀ.  (2)

• The Input Vector


While the state vector plays a central role in the transient analysis of
circuits with inductors and capacitors, we have also learned that in the
absence of an external power source, all transient signals eventually
vanish to zero due to resistances (either intrinsic to the elements or
of resistors in the circuit).
In order to maintain a response, therefore, the circuit requires an input
of energy, in the form of current or voltage sources. Such quantities,
put together into a vector, are referred to as the input. For the
circuit in Figure 1, the sources vS1(t) and iS2(t) are taken as inputs, such
that the input vector is given by

u(t) = [ vS1(t), iS2(t) ]ᵀ.  (3)

• The State Equation


The state equation relates the rate of variation of the state vector
with itself as well as the input vector, such that its general form is

ẋ(t) = A · x(t) + B · u(t), (4)

where for notational convenience we adopted Newton's dot notation for the
derivative, namely, ẋ(t) ≜ dx(t)/dt.
What is left for us to do now is to apply Kirchhoff’s Laws to the
circuit shown in Figure 1, and write the set of equations obtained in
the form of equation (4). Since we know that the currents through
capacitors depend on the derivative of their voltages, while the
voltage over inductors depend on the derivative of their current,
it will prove convenient to adopt the strategy stated below.

Find the currents through capacitors, and the voltages over inductors.

Write equations containing only quantities (voltages or currents)
that are either in the state vector or the input vector.

Let us start with a node analysis on node n1. From the KCL, the
current through capacitor C1 can be determined as

iC1 = iL1 − iR = iL1 − (vn1 − vn2)/R = iL1 + ((vS1 − vC2) − vC1)/R,  (5)

where iC1(t) is the current flowing into capacitor C1, while vn1(t)
and vn2(t) are the voltages on nodes n1 and n2, respectively; and
where we have omitted the explicit dependence of all variables on t,
for notational convenience.
Then, from equation (5) and Volta's Law of Capacitance, we have

iC1 = C1 dvC1/dt  ⟹  v̇C1 = −vC1/(RC1) − vC2/(RC1) + iL1/C1 + 0·iL2 + vS1/(RC1) + 0·iS2.  (6)

Next, let us apply the KCL to node n2 to find the current through
capacitor C2, which yields

iC2 = iL2 − iR − iS2 = iL2 + ((vS1 − vC2) − vC1)/R − iS2,  (7)

thus

iC2 = C2 dvC2/dt  ⟹  v̇C2 = −vC1/(RC2) − vC2/(RC2) + 0·iL1 + iL2/C2 + vS1/(RC2) − iS2/C2.  (8)

Thirdly, the voltage over inductor L1 can be obtained through the
KVL over the leftmost mesh of the circuit, yielding

vL1 = vS1 − vC1,  (9)

such that

vL1 = L1 diL1/dt  ⟹  i̇L1 = −vC1/L1 + 0·vC2 + 0·iL1 + 0·iL2 + vS1/L1 + 0·iS2.  (10)
Finally, proceeding similarly we have

vL2 = vS1 − vC2,  (11)

such that

vL2 = L2 diL2/dt  ⟹  i̇L2 = 0·vC1 − vC2/L2 + 0·iL1 + 0·iL2 + vS1/L2 + 0·iS2.  (12)

Combining equations (6), (8), (10) and (12) into a single vector equation
ultimately leads to

\begin{bmatrix} \dot{v}_{C_1} \\ \dot{v}_{C_2} \\ \dot{i}_{L_1} \\ \dot{i}_{L_2} \end{bmatrix}
=
\begin{bmatrix}
\frac{-1}{RC_1} & \frac{-1}{RC_1} & \frac{1}{C_1} & 0 \\
\frac{-1}{RC_2} & \frac{-1}{RC_2} & 0 & \frac{1}{C_2} \\
\frac{-1}{L_1} & 0 & 0 & 0 \\
0 & \frac{-1}{L_2} & 0 & 0
\end{bmatrix}
\cdot
\begin{bmatrix} v_{C_1} \\ v_{C_2} \\ i_{L_1} \\ i_{L_2} \end{bmatrix}
+
\begin{bmatrix}
\frac{1}{RC_1} & 0 \\
\frac{1}{RC_2} & \frac{-1}{C_2} \\
\frac{1}{L_1} & 0 \\
\frac{1}{L_2} & 0
\end{bmatrix}
\cdot
\begin{bmatrix} v_{S_1} \\ i_{S_2} \end{bmatrix}.  (13)
• The Output Equation
While the State Equation suffices to describe the behaviour of the circuit
fundamentally, one is often interested in other quantities, rather
than the ones listed in the state vector. In the case of the circuit being
studied here, for example, one may be interested in the current
drawn from the voltage source vS1 and in the voltage over the current
source iS2. Furthermore, one could perhaps be interested in the
current flowing through the resistor R.
The output quantities of interest, put into a single vector, compose the
output vector. For the sake of illustration we shall take the output
vector to be

y(t) = [ iS1(t), vS2(t), iR(t) ]ᵀ.  (14)

Notice that unlike the state vector, the output vector is of free choice,
such that its dimension is not directly related to the order of the
circuit. In fact, if a single output quantity is of interest, the output
vector can be a scalar.
Another consequence of the freedom of choice over the output vector is
that it need not be related to any differential quantity.
Instead, the output vector is related to the input vector and the
(instantaneous) state vector directly, such that the associated output
equation of linear circuits is of the linear form

y(t) = C · x(t) + D · u(t).  (15)

Write equations containing only quantities that are either in the


state vector, the input vector or the output.

For the circuit in Figure 1 we have, straightforwardly,

iR = (vC1 − (vS1 − vC2))/R = vC1/R + vC2/R + 0·iL1 + 0·iL2 − vS1/R + 0·iS2,  (16)

vS2 = vS1 − vC2 = 0·vC1 − vC2 + 0·iL1 + 0·iL2 + vS1 + 0·iS2,  (17)

iS1 = iL1 + iC2 = iL1 + (iL2 − iS2 − iR) = −vC1/R − vC2/R + iL1 + iL2 + vS1/R − iS2.  (18)
From equations (16) through (18) we finally obtain

\begin{bmatrix} i_{S_1} \\ v_{S_2} \\ i_R \end{bmatrix}
=
\begin{bmatrix}
\frac{-1}{R} & \frac{-1}{R} & 1 & 1 \\
0 & -1 & 0 & 0 \\
\frac{1}{R} & \frac{1}{R} & 0 & 0
\end{bmatrix}
\cdot
\begin{bmatrix} v_{C_1} \\ v_{C_2} \\ i_{L_1} \\ i_{L_2} \end{bmatrix}
+
\begin{bmatrix}
\frac{1}{R} & -1 \\
1 & 0 \\
\frac{-1}{R} & 0
\end{bmatrix}
\cdot
\begin{bmatrix} v_{S_1} \\ i_{S_2} \end{bmatrix}.  (19)

• The Boundary Equation


The last ingredient of our State Space model of linear circuits of ar-
bitrary order is an equation that describes the initial state of the
circuit, i.e., the boundary conditions of the underlying set of dif-
ferential equations.
Obviously, since the state of the circuit is described by the state vector
x(t), and the initial conditions are nothing but the initial state of the
circuit, the required equation is of the form

x(0) = x0 . (20)

A linear circuit of arbitrary order is fully characterized


by the following system of equations:
ẋ(t) = A · x(t) + B · u(t) (State Equation) (21)
y(t) = C · x(t) + D · u(t) (Output Equation) (22)
x(0) = x0 (Boundary Equation) (23)

Solving the State Space System


Let us now turn our attention to solving the system described above. To
start with, we may recognise that (except for its vector form) the State
Equation is nothing but a complete version of the first-order differential
equation that appeared repeatedly and was studied extensively during our
discussion of first-order circuits in Lecture 6. It will therefore prove
educational to reconsider that familiar problem.
• Revisiting the Series RL with a Source
Reconsider the series RL circuit with an active external power source,
first studied in Lecture 6 and depicted below in Figure 2. We have
learned that this circuit is characterized by the linear ODE

dx(t)/dt + (R/L) x(t) = vS(t)/L  or  dx(t)/dt − a x(t) = b u(t),  (24)

where we have relabelled the functions x(t) = i(t) and u(t) = vS(t)
to fit with the current context, a ≜ −R/L, b ≜ 1/L, and we shall briefly
return to Leibniz notation of the derivative for future convenience.
Figure 2: An RL series circuit driven by a voltage source vS(t), with inductor L, resistor R, current i and voltage v.

We have also shown that the solution of such simple differential equa-
tions can be obtained either by substitution (of a known general
solution), by direct integration, or by the Laplace Transform.
Here, however, we shall apply yet a fourth method, that curiously
shares some similarities to all the above simultaneously.
To this end, multiply equation (24) by e^{−at} to obtain

e^{−at} dx(t)/dt − e^{−at} a x(t) = e^{−at} b u(t)  ⟹  d[e^{−at} x(t)]/dt = e^{−at} b u(t).  (25)
Multiplying both sides by the differential dt and integrating yields

∫₀ᵗ d[e^{−aτ} x(τ)] = ∫₀ᵗ e^{−aτ} b u(τ) dτ  ⟹  e^{−at} x(t) − x(0) = ∫₀ᵗ e^{−aτ} b u(τ) dτ,  (26)
or simply

x(t) = e^{at} x(0) + e^{at} ∫₀ᵗ e^{−aτ} b u(τ) dτ.  (27)

7
Notice that this general solution of a first-order ODE shares:

- A similarity with the substitution method, in the sense that the
exponential function e^{−at} is used to obtain the solution;
- A similarity with the integration method, as the solution is indeed
obtained by integration;
- A similarity with the Laplace Transform method, as the boundary
condition is already embedded in the solution.

Curiosities apart, let us finally substitute the values x(0) = i(0) = I0,
a = −R/L, b = 1/L and u(t) = VS, and solve the integral, to obtain

x(t) = I0 e^{−(R/L)t} + e^{−(R/L)t} (VS/L)(L/R) [e^{(R/L)τ}]₀ᵗ = I0 e^{−(R/L)t} + (VS/R)(1 − e^{−(R/L)t}),  (28)
which is (obviously) identical to the solution found in Lecture 6.
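As a quick sanity check, the closed-form solution (28) can be compared against a direct numerical integration of the ODE. The component values below (R, L, VS, I0) are arbitrary choices for this sketch, not values from the lecture:

```python
import math

# Illustrative check of the integrating-factor solution (28) for the series RL
# circuit: dx/dt = -(R/L) x + VS/L with constant source VS and x(0) = I0.
# The numbers below are arbitrary example values.
R, L, VS, I0 = 2.0, 1.0, 10.0, 0.5

def closed_form(t):
    # x(t) = I0 e^{-Rt/L} + (VS/R)(1 - e^{-Rt/L}), equation (28)
    return I0 * math.exp(-R * t / L) + (VS / R) * (1 - math.exp(-R * t / L))

def rk4(f, x0, t_end, n):
    # classical fourth-order Runge-Kutta integration of dx/dt = f(x)
    h, x = t_end / n, x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + h * k1 / 2)
        k3 = f(x + h * k2 / 2)
        k4 = f(x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

f = lambda x: -(R / L) * x + VS / L
for t in (0.5, 1.0, 3.0):
    assert abs(rk4(f, I0, t, 1000) - closed_form(t)) < 1e-6
```

The agreement also confirms the familiar limits: x(0) = I0 and x(t) → VS/R as t → ∞.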

This technique to solve ODEs is known as the


Integrating Factor method.

Like the Laplace Transform method, it produces truly general solutions,
but can become cumbersome for higher-order equations. In any
case, in the next section we shall demonstrate how this technique is
particularly suitable for solving state equations such as that in
equation (21).

• The Integrating Factor Method and the State Space Equation


Consider a vector-form generalisation of equation (27), namely

x(t) = e^{At} · x₀ + e^{At} · ∫₀ᵗ e^{−Aτ} · B · u(τ) dτ.  (29)

Here we encounter a problem, namely, how to evaluate the matrix
exponential e^{At}.
Indeed, notice that exponentiation is not a linear operation, such
that e^{At} cannot be calculated distributively, that is, by exponentiating
each element of the matrix At separately.
To elaborate, recall that we have seen in Lecture 6 that the exponential
function e^{at} is nothing but a convenient way to represent the
converging result of the power series

∑_{n=0}^{∞} aⁿ tⁿ/n! ≡ e^{at}.  (30)

From this equation, it follows that in the matrix case we also have

e^{At} = ∑_{n=0}^{∞} Aⁿ tⁿ/n!.  (31)

The problem, however, is that matrix powers behave very differently
from scalar powers. Specifically, scalar powers are monotonic,
while matrix powers in general are not.
To illustrate, consider the simple matrix

A ≜ \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}.  (32)

One can easily verify that

A² = \begin{bmatrix} 0 & -2 \\ 2 & 0 \end{bmatrix},  A³ = \begin{bmatrix} -2 & -2 \\ 2 & -2 \end{bmatrix},  A⁴ = \begin{bmatrix} -4 & 0 \\ 0 & -4 \end{bmatrix},  A⁵ = \begin{bmatrix} -4 & 4 \\ -4 & -4 \end{bmatrix},  (33)

and so on.
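The non-monotonic behaviour is easy to reproduce. A minimal, dependency-free sketch computing the powers of the example matrix:

```python
# Check of the matrix powers in equation (33) for A = [[1, -1], [1, 1]],
# illustrating that the entries do not grow or shrink monotonically.
def matmul(X, Y):
    # 2x2 matrix product, kept dependency-free for illustration
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [1, 1]]
P = A
powers = {}
for n in range(2, 6):
    P = matmul(P, A)
    powers[n] = P

assert powers[2] == [[0, -2], [2, 0]]
assert powers[3] == [[-2, -2], [2, -2]]
assert powers[4] == [[-4, 0], [0, -4]]   # A^4 = -4I
assert powers[5] == [[-4, 4], [-4, -4]]  # hence A^5 = -4A
```

Since A⁴ = −4I, the powers cycle (up to a growing scale factor) rather than growing monotonically.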

Matrix powers are not monotonic!

The lack of monotonicity in matrix powers makes it very hard to


attempt deriving the converging result of an infinite matrix power
series directly.
Fortunately, a very elegant solution to this problem exists, thanks to
the Laplace Transform. Indeed, recall the Laplace Transform pair

e^{at} ←→ 1/(s − a),  (34)
which for the matrix exponential can be written as

eAt ←→ (sI − A)−1 or eAt = L−1 [(sI − A)−1 ]. (35)

Before we proceed, let us remark that we have seen the matrix sI − A


before. This matrix is in fact the matrix whose determinant describes
the characteristic polynomial associated with the fundamental dif-
ferential equation describing the circuit.
The recurrent appearance of the matrix (sI − A)−1 in linear algebra
problems earns it the name resolvent of A.

The resolvent of a matrix A is defined as (sI − A)−1 .

Returning to our discussion, obviously from equation (35) we have

e−At = L−1 [(sI + A)−1 ]. (36)

In possession of the results in equations (35) and (36), and in light of


the fact that integration is a linear operation we can evaluate equa-
tion (29) by distributing the integral to each element of the integrand
matrix.
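As an illustration (not part of the lecture's derivation), the truncated power series (31) can be evaluated directly for a small matrix, and checked against the identity e^{At}·e^{−At} = I, which holds because At and −At commute:

```python
import math

# Sketch: approximating the matrix exponential by truncating the power
# series (31), e^{At} = sum_n A^n t^n / n!.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_series(A, t, terms=30):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = matmul(term, A)                   # term is now A^k
        scale = t ** k / math.factorial(k)
        result = [[result[i][j] + scale * term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

A = [[1.0, -1.0], [1.0, 1.0]]   # the example matrix of equation (32)
P = matmul(expm_series(A, 0.5), expm_series(A, -0.5))
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-9
```

For larger matrices or larger |t| this direct truncation becomes inaccurate, which is exactly why the resolvent route below is preferable.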

• Numerical Example
In order to fully illustrate the State Space method, let us discuss a
numerical example based on the circuit of Figure 1 and its state equation
as given in (13). Let, for instance,¹ R = 10 Ω, C1 = 0.1 F, C2 = 0.2 F,
L1 = 2 H and L2 = 1 H.
With these figures, the coefficient matrix of the state space equation
of the circuit becomes
A = \begin{bmatrix}
-1 & -1 & 10 & 0 \\
-\frac{1}{2} & -\frac{1}{2} & 0 & 5 \\
-\frac{1}{2} & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{bmatrix}.  (37)
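A small sketch double-checking equation (37): building A from the element formulas of the state equation (13) with the example component values:

```python
# Building the coefficient matrix A of the state equation (13) from the
# component values of the numerical example, and checking the result
# against equation (37).
R, C1, C2, L1, L2 = 10.0, 0.1, 0.2, 2.0, 1.0

A = [
    [-1 / (R * C1), -1 / (R * C1), 1 / C1, 0.0],
    [-1 / (R * C2), -1 / (R * C2), 0.0, 1 / C2],
    [-1 / L1, 0.0, 0.0, 0.0],
    [0.0, -1 / L2, 0.0, 0.0],
]

expected = [
    [-1.0, -1.0, 10.0, 0.0],
    [-0.5, -0.5, 0.0, 5.0],
    [-0.5, 0.0, 0.0, 0.0],
    [0.0, -1.0, 0.0, 0.0],
]
for row, want in zip(A, expected):
    for got, ref in zip(row, want):
        assert abs(got - ref) < 1e-12
```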

The associated resolvents (of A and −A) are respectively given by, writing
for compactness p = s² + 5, q = 2s² + 3s + 10 and q̄ = 2s² − 3s + 10,

(sI − A)^{−1} =
\begin{bmatrix}
\frac{s}{3p} + \frac{4s}{3q} & \frac{-2s^2}{pq} & \frac{10}{3p} + \frac{40}{3q} & \frac{-10s}{pq} \\
\frac{-s^2}{pq} & \frac{2s}{3p} + \frac{2s}{3q} & \frac{-10s}{pq} & \frac{10}{3p} + \frac{10}{3q} \\
-\frac{1}{6p} - \frac{2}{3q} & \frac{s}{pq} & \frac{4s+6}{3q} + \frac{s}{3p} & \frac{5}{pq} \\
\frac{s}{pq} & -\frac{2}{3p} - \frac{2}{3q} & \frac{10}{pq} & \frac{2s+3}{3q} + \frac{2s}{3p}
\end{bmatrix},  (38)

and

(sI + A)^{−1} =
\begin{bmatrix}
\frac{s}{3p} + \frac{4s}{3\bar{q}} & \frac{2s^2}{p\bar{q}} & -\frac{10}{3p} - \frac{40}{3\bar{q}} & \frac{-10s}{p\bar{q}} \\
\frac{s^2}{p\bar{q}} & \frac{2s}{3p} + \frac{2s}{3\bar{q}} & \frac{-10s}{p\bar{q}} & -\frac{10}{3p} - \frac{10}{3\bar{q}} \\
\frac{1}{6p} + \frac{2}{3\bar{q}} & \frac{s}{p\bar{q}} & \frac{4s-6}{3\bar{q}} + \frac{s}{3p} & \frac{-5}{p\bar{q}} \\
\frac{s}{p\bar{q}} & \frac{2}{3p} + \frac{2}{3\bar{q}} & \frac{-10}{p\bar{q}} & \frac{2s-3}{3\bar{q}} + \frac{2s}{3p}
\end{bmatrix}.  (39)

¹ Remember that these values of capacitance and inductance are absurdly large for
a real-life circuit. We use these simple values, however, solely for illustration purposes.

We can now obtain the inverse Laplace Transform of each separate
term of these equations in order to build the matrices required to
finally compute x(t) according to equation (29). Obviously such calculations
are too laborious to do by hand (especially in class), so we will
not pursue the solution to the end. Let us, however, just point out that
the inverse Laplace Transform of all the elements of both matrices
above can be easily obtained, as they are all combinations of functions
whose Laplace Transforms we already know.
To show this, simply recall the method of partial fraction expansion,
which we shall illustrate using the most complicated of all matrix
elements appearing above as an example. Indeed, making

2s²/((s² + 5)(2s² − 3s + 10)) = (as + b)/(s² + 5) + (cs + d)/(2s² − 3s + 10),  (40)

we obtain the linear system

\begin{bmatrix} 2 & 0 & 1 & 0 \\ -3 & 2 & 0 & 1 \\ 10 & -3 & 5 & 0 \\ 0 & 10 & 0 & 5 \end{bmatrix}
\cdot
\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}
=
\begin{bmatrix} 0 \\ 2 \\ 0 \\ 0 \end{bmatrix}
\implies
\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}
=
\begin{bmatrix} -2/3 \\ 0 \\ 4/3 \\ 0 \end{bmatrix}.  (41)

From the latter we obtain

2s²/((s² + 5)(2s² − 3s + 10)) = 4s/(3(2s² − 3s + 10)) − 2s/(3(s² + 5)).  (42)

Each term of the latter summation is therefore a special case of the
function

I(s) = (a·s + b)/(s² + 2αs + ω²),  (43)

which we have seen in Lecture 7 to be associated with the solution of
second-order ODEs.
In summary, the inverse Laplace Transforms of the elements of the
resolvent matrices shown above are, in general, combinations of
exponentials, exponential pulses, and exponentially decaying
sinusoids.
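The partial-fraction identity (42) is easy to spot-check numerically at a few sample points:

```python
# Numerical spot-check of the partial-fraction expansion in equation (42):
# 2s^2 / ((s^2+5)(2s^2-3s+10)) = 4s/(3(2s^2-3s+10)) - 2s/(3(s^2+5)).
def lhs(s):
    return 2 * s**2 / ((s**2 + 5) * (2 * s**2 - 3 * s + 10))

def rhs(s):
    return 4 * s / (3 * (2 * s**2 - 3 * s + 10)) - 2 * s / (3 * (s**2 + 5))

# both denominators have no real roots, so any real sample point is safe
for s in (0.3, 1.0, 2.7, 10.0, -4.2):
    assert abs(lhs(s) - rhs(s)) < 1e-10
```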
• Direct Solution via Laplace Transform
Having introduced the notion of resolvent matrix and its relation-
ship with the matrix exponential, let us conclude our discussion on
the State Space method by showing that a direct solution of the state
space equation can also be obtained directly via the Laplace Transform
and its inverse.

Indeed, let us apply the Laplace Transform directly to equation (21),
which yields

L[ẋ(t)] = A·L[x(t)]+B·L[u(t)] =⇒ sX(s)−x0 = A·X(s)+B·U(s).


(44)
Rearranging, we have

(sI − A) · X(s) = x0 + B · U(s), (45)

or finally

X(s) = (sI − A)−1 · x0 + (sI − A)−1 · B · U(s). (46)

Obviously this solution must be identical to that offered via equation


(29). And since the first term of equations (29) and (46) are the same,
the following identity can be obtained by direct comparison of these
equations

e^{At} · ∫₀ᵗ e^{−Aτ} · B · u(τ) dτ  ←→  (sI − A)^{−1} · B · U(s).  (47)

This term of the solution is obviously independent of the initial condi-


tions determined by x0 , and instead is fully determined by the input
vector u(t) (or its Laplace Transform U(s)), the coefficient matrix B,
and the resolvents of A and −A.
For this reason, this component of the solution is referred to as the
forced solution.
From the above, it can be said that both the direct Laplace Transform
and the Integrating Factor methods are equivalent, differing only in
the approach to compute the forced solution.
Steady-state Regime and the Transfer Function
Having understood that electronic circuits operating on time-varying signals
are in fact multiple-input, multiple-output systems, whose behaviour
depends only momentarily (during the transient stage) on certain initial
conditions, we are better equipped to understand the notion of a steady-state
regime.
Indeed, this terminology has been used frequently so far without a formal
definition. Let us correct this deficiency now.

The steady-state behaviour of a circuit


is defined as its forced response.

Mathematically, the steady state regime of a linear circuit is fully charac-


terised by equation (47).

• The (Steady-state) Transfer Function
Let us briefly study the relationship between the output of a circuit
in the steady-state regime and its input. To this end, let us return to
the state space equations in the steady-state case and in the Laplace
Transform domain, which (from the above) reduce to

X(s) = (sI − A)−1 · B · U(s), (State Equation) (48)


Y(s) = C · X(s) + D · U(s). (Output Equation) (49)

Substituting equation (48) into (49) readily yields

Y(s) = (C · (sI − A)^{−1} · B + D) · U(s).  (50)

We can therefore define the function

H(s) ≜ Y(s)/U(s) = C · (sI − A)^{−1} · B + D.  (51)

The function H(s) is referred to as the Transfer Function of the system.
Notice that it aggregates into a single matrix all the information
about the entire circuit (or more generally, about the “system”) which is
contained in the matrices A, B, C and D.
In the particular case of the examples considered here (that is, linear
circuits with inductors and capacitors), such matrices are constant,
such that the transfer function is said to be time-invariant.
Notice also that the transfer function is sufficient to model the steady-state
(but not the transient!) response of the system to any given
input. Specifically, for an arbitrary input U(s), the response of the
system is given by

Y(s) = H(s) · U(s).  (52)

Obviously the response described by this equation (52) is not in time,


but rather in the abstract domain of the variable s. We have seen,
however, that when transformed back to the time domain via an ap-
propriate Laplace Transformation pair, the variable s is replaced by
roots of polynomials, which are in general complex quantities. As a
consequence, oscillatory signals result, whose frequency and phase
are determined by s (or more specifically, by the roots in s).

The “abstract” domain of the variable s is referred to as the


frequency domain.

We shall better study the frequency-domain response of circuits in the
future, but for now let us return to our time-domain analysis. To
this end, we need to obtain a time-domain version not only of the
transfer function itself, but also of the “transfer equation.”
To this end, consider the following function

y(t) ≜ ∫₀ᵗ h(τ) u(t − τ) dτ,  (53)

where the functions y(t), h(t) and u(t) should (for now) be taken to
be unrelated to the functions y(t), h(t) and u(t) appearing above.
Consider then the Laplace Transform of y(t) defined above, which is
given by

L[y(t)] = Y(s) = ∫₀^∞ e^{−st} ( ∫₀ᵗ h(τ) u(t − τ) dτ ) dt.  (54)

The integral above is, unlike all integrals we have seen so far, a double
integral, so let us take a moment to discuss it qualitatively.
First, notice that the inner integral runs from τ = 0 until τ = t,
producing a result that is a function of t (no longer of τ ). Then, the
second integral runs from t = 0 to t → ∞, producing a result that is
a function of s (due to the exponential term).
Altogether, the integration takes place over a portion of the two-
dimensional space τ × t defined by the equations τ = 0 and τ = t,
“screened” horizontally from τ : 0 → t, with t growing from t : 0 → ∞.
This integration screening is as depicted in Figure 3(a).

Figure 3: Integration screenings of the region defined by τ = 0 and τ = t: (a) horizontal screening, τ : 0 → t with t : 0 → ∞; (b) vertical screening, t : τ → ∞ with τ : 0 → ∞.

Since the order of integration does not alter the result, however, we
can swap the two integrals, as long as the integration region remains
the same. To this end, consider the following alternative version of
equation (54)
Y(s) = ∫₀^∞ h(τ) ( ∫_τ^∞ u(t − τ) e^{−st} dt ) dτ,  (55)

where the integration screening is now according to Figure 3(b).


Next, let us apply a change of variables at the inner integral, such that
r = t − τ =⇒ dr = dt and r : 0 → ∞, which yields
Y(s) = ∫₀^∞ h(τ) e^{−sτ} ( ∫₀^∞ u(r) e^{−sr} dr ) dτ,  (56)

where we have moved the term e^{−sτ} outside the inner integral, as
that term is not a function of r.
But now, the integral within the parentheses does not depend on τ
and therefore can be taken out of the outer integral, leading to the
following product of decoupled integrals

Y(s) = ( ∫₀^∞ h(τ) e^{−sτ} dτ ) · ( ∫₀^∞ u(t) e^{−st} dt ) = H(s) · U(s),  (57)

where we relabelled r = t for referential convenience, and have identified
that the two decoupled integrals are in fact the Laplace Transforms
of the functions h(t) and u(t), respectively.
Combining equations (54) and (57) finally gives

Y(s) = L[ ∫₀ᵗ h(τ) u(t − τ) dτ ] = H(s) · U(s),  (58)

or

∫₀ᵗ h(τ) u(t − τ) dτ  ←→  H(s) · U(s).  (59)

The integral appearing in the Laplace Transform pair shown above is


known as the convolution of the functions h(t) and u(t).
In plain words, the result demonstrated above can be stated as follows.

The Laplace Transform of the convolution of two functions is the


product of their individual Laplace Transforms.
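The convolution theorem can be illustrated numerically. The signals below are assumed test functions (not from the lecture): h(t) = e^{−t} and u(t) = e^{−2t}, whose transforms are H(s) = 1/(s+1) and U(s) = 1/(s+2), so that H(s)U(s) = 1/6 at s = 1:

```python
import math

# Numerical spot-check of the convolution theorem (59): the Laplace
# Transform of (h * u)(t) equals H(s) U(s).  Assumed test signals:
# h(t) = e^{-t}, u(t) = e^{-2t}.
dt, N = 0.01, 2000          # time grid out to t = 20, coarse but adequate
h = [math.exp(-k * dt) for k in range(N)]
u = [math.exp(-2 * k * dt) for k in range(N)]

# discrete convolution: (h * u)(k dt) ~ sum_j h(j dt) u((k - j) dt) dt
conv = [sum(h[j] * u[k - j] for j in range(k + 1)) * dt for k in range(N)]

s = 1.0
# Riemann-sum Laplace Transform of the convolution at s = 1
lap_conv = sum(c * math.exp(-s * k * dt) for k, c in enumerate(conv)) * dt
product = (1 / (s + 1)) * (1 / (s + 2))   # H(1) U(1) = 1/6

assert abs(lap_conv - product) < 0.01     # agree to within discretisation error
```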

• The Dirac Delta Function and the Impulse Response
Comparing equations (52) and (59), we can now identify that while the
frequency-domain (steady-state) response of a system characterised
by the transfer function H(s) is given by the product of the Laplace
Transforms of the transfer function and the input, the time-domain
response of the system is the convolution of the time-domain input
and the function h(t), which in turn is the inverse Laplace Transform of H(s).
A fair question to ask, however, is:

What exactly is the Laplace Transform pair


of a system’s transfer function?

In order to answer this question, consider the following function, defined as

f(t) = 1/T if 0 ≤ t ≤ T,  and  f(t) = 0 if t > T.  (60)

The function defined above is what is referred to as a squared pulse,
as it has a squared shape, and exists (i.e. is non-zero) only for a brief
duration T.
Notice that the integral of f(t) is given by

∫_{−∞}^{∞} f(t) dt = ∫₀ᵀ f(t) dt = [t/T]_{t=0}^{t=T} = 1,  ∀ T.  (61)

In other words, the squared pulse has a unitary integral, regardless of
how short it is. Notice, however, that as the pulse becomes shorter and
shorter, its height becomes higher and higher, such that as T → 0, the
pulse:

- Exists only exactly at t = 0,
- Has an infinite amplitude,
- Has a unit integral.

Functions with such a characteristically short duration are called impulses.
And given the above, we can define an abstract impulse function
with the following properties

δ(t) = 0 if t ≠ 0,  and  δ(t) = ∞ if t = 0,  (62)

with

∫_{−∞}^{∞} δ(t) dt = 1.  (63)

This particular impulse function is referred to as the Dirac Delta


(impulse) function, or simply the unit impulse.
Let us take a moment to consider what would be the Laplace Transform
of the unit impulse. By definition, we have

L[δ(t)] = ∫₀^∞ δ(t) e^{−st} dt = lim_{T→0} ∫₀ᵀ (e^{−st}/T) dt = lim_{T→0} (1 − e^{−sT})/(sT) = lim_{T→0} e^{−sT} = 1,  (64)

where we emphasize the change in the upper integration limit in the
second integral, and the use of l'Hôpital's rule in the last limit
expression.

The Laplace Transform of the Dirac Delta function is the unit.
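A quick numerical look at the limit in (64): the transform of the unit-area pulse, (1 − e^{−sT})/(sT), indeed approaches 1 as T shrinks, for any fixed s (s = 3 is an arbitrary choice here):

```python
import math

# From (64), the Laplace Transform of the width-T unit-area pulse is
# (1 - e^{-sT}) / (sT); as T -> 0 it tends to 1 for any fixed s.
def pulse_transform(s, T):
    return (1 - math.exp(-s * T)) / (s * T)

s = 3.0
values = [pulse_transform(s, T) for T in (1.0, 0.1, 0.01, 0.001)]
assert all(values[i] < values[i + 1] for i in range(3))  # climbing toward 1
assert abs(values[-1] - 1.0) < 0.01
```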

Now, obviously, the Dirac Delta function (or unit impulse) can be
considered as an input to a circuit. We may therefore ask what is the
response of the circuit to the unit impulse input.
Such a response can be easily obtained using the results of equations
(59) and (64). Specifically, we have

L^{−1}[H(s) · 1] = ∫₀ᵗ h(τ) δ(t − τ) dτ  and  L^{−1}[H(s)] = h(t),  (65)

or simply

h(t) = ∫₀ᵗ h(τ) δ(t − τ) dτ.  (66)

In plain words, the result given in equation (66) can be stated as


follows.

The inverse Laplace Transform of a system’s Transfer Function


is the system’s response to the unit impulse.

• Transfer Function and Ohm’s Law
The concept of Transfer Function is a very powerful tool in circuit
analysis, especially in certain classes of circuits such as amplifiers,
filters, oscillators etc. This is because in the context of circuit analysis,
this concept can be understood as a generalisation of Ohm’s Law.
To illustrate this, consider the amplifier circuit shown in Figure 4.

Figure 4: A second-order circuit with an operational amplifier (source VS, input voltage vin, output Vout, current i, three 1 Ω resistors and two 1 F capacitors).

As studied extensively in Lecture 5, in a circuit such as this one is
typically interested in the relation Vout/VS, such that it is intuitive
that the entire circuit can be represented by a given transfer function.
Unlike the circuits seen in Lecture 5, however, this circuit is of second
order, due to the presence of two capacitors that cannot be combined,
such that obtaining the transfer function of the circuit can be laborious,
as seen previously.
Next, consider the three fundamental components of electric circuits
and their corresponding input/output relationships, but under a Laplace
Transform perspective, that is

V(t) = R I(t)  ←→  V(s)/I(s) = R,  (Ohm's Law of Resistance)  (67)

V = L di/dt  ←→  V(s)/I(s) = sL,  (Faraday's Law of Induction)  (68)

and

I = C dv/dt  ←→  V(s)/I(s) = 1/(sC).  (Volta's Law of Capacitance)  (69)

Figure 5: Circuit of Figure 4 transformed to the “frequency domain” s, with each 1 F capacitor replaced by an impedance 1/s.


Using these “frequency-domain” representations of the key circuit ele-
ments, the circuit shown above can be simplified into the circuit shown
in Figure 5, which resembles a purely resistive OpAmp circuit.
Analysing the circuit that way, we first obtain the voltage vin by KVL
over the left-most mesh, which now becomes nothing but a voltage
divider, such that

vin = (1/(1 + 1/s)) VS = (s/(1 + s)) VS.  (70)
Next, using the ideal OpAmp rule that the voltages at the positive and
negative inputs are virtually the same, we have from the mesh going
through the OpAmp inputs

i = VS − vin = VS − (s/(1 + s)) VS = (1/(1 + s)) VS.  (71)
Since the current i does not go into the OpAmp, but rather to the
output via the parallel combination of a resistor and a capacitor, we
have

vin − Vout = i · (1 ∥ 1/s)  ⟹  Vout = (s/(1 + s)) VS − (1/(1 + s)) · ((1/s)/(1 + 1/s)) · VS,  (72)

which simplifies to

Vout = (s/(1 + s)) VS − (1/(1 + s)²) VS = ((s² + s − 1)/(1 + s)²) VS.  (73)

In other words, the transfer function of the circuit is given by

h(s) = Vout/VS = (s² + s − 1)/(1 + s)² = 1 − 1/(1 + s) − 1/(1 + s)².  (74)
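The partial-fraction form in (74) can be spot-checked numerically at a few sample points (avoiding the pole at s = −1):

```python
# Spot-check of the partial-fraction expansion in equation (74):
# (s^2 + s - 1)/(1 + s)^2 = 1 - 1/(1 + s) - 1/(1 + s)^2.
def h_rational(s):
    return (s**2 + s - 1) / (1 + s) ** 2

def h_expanded(s):
    return 1 - 1 / (1 + s) - 1 / (1 + s) ** 2

for s in (0.0, 0.5, 2.0, 10.0, -0.5):   # sample points away from s = -1
    assert abs(h_rational(s) - h_expanded(s)) < 1e-12
```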

Now, recall that

f(t) = δ(t)  ←→  F(s) = 1,  (75)

f(t) = e^{−t}  ←→  F(s) = 1/(1 + s),  (76)

f(t) = t e^{−t}  ←→  F(s) = 1/(1 + s)².  (77)

Using these familiar results we can finally obtain the circuit’s response
to a unit impulse, namely

h(t) = δ(t) − e−t − t e−t . (78)

• Steady State Response and the Transfer Function in Time

The transfer function of a circuit in the time domain can obviously also
be used to obtain the forced (steady-state) response of the circuit
to an arbitrary input. To this end, recall that, as summarized
by equation (59), the convolution of h(t) with a given input u(t)
is the transformation pair of the product of the corresponding Laplace
Transforms.
Consider then the convolution between the time-domain transfer function
given in equation (78), and a constant input u(t) = VS, which
yields

y(t) = ∫₀ᵗ h(τ) · VS dτ = VS ∫₀ᵗ δ(τ) dτ − VS ∫₀ᵗ (1 + τ) e^{−τ} dτ
     = VS − 2VS + VS (2 + t) e^{−t} = −VS + VS (2 + t) e^{−t}.  (79)
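A sketch verifying the step response (79): integrating the smooth part of h(t) numerically (the Dirac delta contributes exactly VS) and comparing against the closed form:

```python
import math

# Check of the step response (79): convolving h(t) = delta(t) - (1 + t)e^{-t}
# with a constant input VS should give y(t) = -VS + VS (2 + t) e^{-t}.
VS = 1.0

def y_closed(t):
    return -VS + VS * (2 + t) * math.exp(-t)

def y_numeric(t, n=20000):
    # delta contributes VS exactly; the smooth part is integrated by midpoints
    dtau = t / n
    smooth = sum((1 + (j + 0.5) * dtau) * math.exp(-(j + 0.5) * dtau)
                 for j in range(n)) * dtau
    return VS - VS * smooth

for t in (0.5, 1.0, 5.0):
    assert abs(y_numeric(t) - y_closed(t)) < 1e-6

assert abs(y_closed(0.0) - VS) < 1e-12    # y(0) = VS
assert abs(y_closed(50.0) + VS) < 1e-12   # y(t) -> -VS as t grows
```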

Notice that equation (79) indicates that at t = 0, the circuit outputs

y(t) = VS . (80)

In other words, for a very short amount of time, immediately as the


source VS is connected, the circuit behaves as a unit-gain voltage
buffer. We have seen in Lecture 5 that one design for a unit-gain
voltage buffer is as shown in Figure 6.
Now, consider that at the very moment when the source is connected to
the circuit of Figure 4, the voltages over the capacitors grow abruptly.
But due to the relationship i(t) = C dv(t)/dt, this implies that the currents
through the capacitors at that moment are (in theory) infinitely large.

Figure 6: Unit gain buffer circuit using an OpAmp.

In practice, this indicates that in this circuit capacitors behave like


short-circuits at t = 0. In other words, the circuit of Figure 4, at
t = 0 is equivalent to the circuit shown below in Figure 7.

Figure 7: Equivalent of the circuit of Figure 4 at t = 0.

As can be seen by comparing these figures, except for the “leaking”


resistors in Figure 7 (shown in grey), the two circuits are virtually
the same, which serves the purpose of intuitively explaining why the
circuit of Figure 4 behaves like a unit-gain buffer at t = 0.
Next, let us return to equation (79), and study the response of the
circuit as t → ∞. In this case we obtain

y(t) → −VS.  (81)

In other words, in the DC steady state regime, the circuit shown in


Figure 5 behaves as a unit-gain inverting amplifier, or simply, a
voltage inverter.
Once again, we have seen in Lecture 5 that a possible implementation
of a unit-gain inverter is as given below in Figure 8.

Figure 8: Voltage inverter circuit using an OpAmp.

Now, let us consider the circuit of Figure 4 taking into account
the fact that as t → ∞ all capacitors become open circuits, such that
an equivalent circuit is as depicted in Figure 9 below. Once again it
can be seen that except for the resistance at the positive input of the
OpAmp, the two circuits are the same.

Figure 9: Equivalent of the circuit of Figure 4 in the DC steady-state regime.

From all we have learned in this Lecture, we can conclude the following.

The State-Space Method generalizes the Linear Systems Method


from resistive to higher-order circuits.

Mandatory Reading

- Rowell [1]
- Martinez-Marin [2]
- Alexander and Sadiku [3]: Ch. 8 (Sec. 8.8)

• State-space Formulation
• Integrating Factor Method
• Transfer Functions
• Convolution
• Impulse Response
• Generalization of Ohm’s Law (Frequency Domain)

References

[1] D. Rowell, “Time-domain solution of LTI state equations,” Oct. 2002.

[2] T. Martinez-Marin, “State-space formulation for circuit analysis,” IEEE Trans. on Education, 2009.

[3] C. K. Alexander and M. N. O. Sadiku, Fundamentals of Electric Circuits, 3rd ed. McGraw-Hill, 2007.
