PAPER

Fundamentals of Fast Simulation Algorithms for RF Circuits

The newest generation of circuit simulators performs periodic steady-state analysis of RF circuits containing thousands of devices using a variety of matrix-implicit techniques which share a common analytical framework.

By Ognen Nastov, Ricardo Telichevesky, Member IEEE, Ken Kundert, and Jacob White, Member IEEE
ABSTRACT | Designers of RF circuits such as power amplifiers, mixers, and filters make extensive use of simulation tools which perform periodic steady-state analysis and its extensions, but until the mid 1990s, the computational costs of these simulation tools restricted designers from simulating the behavior of complete RF subsystems. The introduction of fast matrix-implicit iterative algorithms completely changed this situation, and extensions of these fast methods are providing tools which can perform periodic, quasi-periodic, and periodic noise analysis of circuits with thousands of devices. Even though a number of research groups continue to develop extensions of matrix-implicit methods, there is still no compact characterization which introduces the novice researcher to the fundamental issues. In this paper, we examine the basic periodic steady-state problem and provide both examples and linear algebra abstractions to demonstrate connections between seemingly dissimilar methods and to try to provide a more general framework for fast methods than the standard time-versus-frequency-domain characterization of finite-difference, basis-collocation, and shooting methods.

KEYWORDS | Circuit simulation; computer-aided analysis; design automation; frequency-domain analysis; numerical analysis

I. INTRODUCTION

The intensifying demand for very high performance portable communication systems has greatly expanded the need for simulation algorithms that can be used to efficiently and accurately analyze frequency response, distortion, and noise of RF communication circuits such as mixers, switched-capacitor filters, and amplifiers. Although methods like multitone harmonic balance, linear time-varying, and mixed frequency-time techniques [4], [6]-[8], [26], [37] can perform these analyses, the computation cost of the earliest implementations of these techniques grew so rapidly with increasing circuit size that they were too computationally expensive to use for more complicated circuits. Over the past decade, algorithmic developments based on preconditioned matrix-implicit Krylov-subspace iterative methods have dramatically changed the situation, and there are now tools which can easily analyze circuits with thousands of devices. Preconditioned iterative techniques have been used to accelerate periodic steady-state analysis based on harmonic balance methods [5], [11], [30], time-domain shooting methods [13], and basis-collocation schemes [41]. Additional results for more general analyses appear constantly.

Though there are numerous excellent surveys on analysis techniques for RF circuits [23], [35], [36], [42], the literature analyzing the fundamentals of fast methods is limited [40], making it difficult for novice researchers to contribute to the field. In this paper, we try to provide a comprehensive yet approachable presentation of fast methods.

Manuscript received May 23, 2006; revised August 27, 2006. This work was originally supported by the DARPA MAFET program, and subsequently supported by grants from the National Science Foundation, in part by the MARCO Interconnect Focus Center, and in part by the Semiconductor Research Center.
O. Nastov is with Agilent Technologies, Inc., Westlake Village, CA 91362 USA (e-mail: [email protected]).
R. Telichevesky is with Kineret Design Automation, Inc., Santa Clara, CA 95054 USA (e-mail: [email protected]).
K. Kundert is with Designer's Guide Consulting, Inc., Los Altos, CA 94022 USA (e-mail: [email protected]).
J. White is with the Research Laboratory of Electronics and Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA (e-mail: [email protected]).
Digital Object Identifier: 10.1109/JPROC.2006.889366
… several numerical techniques for solving these two formulations of the periodic steady-state problem.

    C (d/dt) v(t) + G v(t) + u(t) = 0    (4)

where C and G are each N × N matrices whose elements are given by …

    x(t) = x(t + T)    (8)

for all t. The circuit differential equation system has a periodic solution if the input u(t) is periodic and there exists a periodic v(t) that satisfies (3). As an example, consider the RLC example in Fig. 1, whose associated differential equation system is given in (6) and whose response to a sinusoidal current source from zero initial conditions is plotted in Fig. 2. As is easily verified, if the initial condition on the inductor current is zero, and the initial voltage on the capacitor is v(0) = 30.0, then a single-period simulation will produce one of the last cycles in Fig. 2.

The above condition for a periodic solution suggests that it is necessary to verify periodicity at every time instant t, but under certain very mild conditions this is not the case. If q(·) and i(·) satisfy certain smoothness conditions, then given a particular initial condition and input, the solution to (3) will exist and be unique. This uniqueness implies that if v(0) = v(T) and u(t) = u(t + T) for all t, then v(t) = v(t + T) for all t. To see this, consider that at time T, the input and state are such that it is identical to restarting the differential equation at t = 0. Therefore, uniqueness requires that the solution on t ∈ [T, 2T] replicate the solution on t ∈ [0, T].

The above analysis suggests that a system of equations whose solution is periodic can be generated by appending to the differential equation system (3) what is often referred to as a two-point boundary constraint, as in

    (d/dt) q(v(t)) + i(v(t)) + u(t) = 0,    v(T) − v(0) = 0.    (9)

C. State Transition Function Formulation

An alternative point of view of the differential equation system in (3) is to treat the system as implicitly defining an algebraic function which maps an initial condition, an N-length vector v_o, and a time, τ, to the solution of the system at time τ, an N-length vector v_τ. The result of applying this implicitly defined state transition function to a given initial condition and time is

    v_τ = Φ(v_o, τ)    (10)

and Φ can be evaluated by solving (3) for v(t) with the initial condition v(0) = v_o and then setting v_τ = v(τ). The situation is shown diagrammatically in Fig. 4.

Rephrasing the result from above: given a differential equation system whose nonlinearities satisfy smoothness conditions and whose input is periodic with period T, if the solution to that system satisfies v(0) = v(T), then the v(t) computed from the initial condition v_T = v(T) = v(0) will be the periodic steady state. The state transition function, though implicitly defined, yields an elegant way of expressing a nonlinear algebraic equation for such a v_T, as in

    v_T − Φ(v_T, T) = 0.    (11)
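To make the state-transition viewpoint concrete, the following minimal Python sketch (our own construction, not the authors' code; the scalar RC circuit, element values, and all names are invented for illustration) evaluates Φ(v_o, T) by numerical integration and then solves v_T − Φ(v_T, T) = 0 with a scalar Newton iteration whose derivative is estimated by finite differences.

    # Sketch: periodic steady state of a scalar RC circuit via the
    # state-transition function of (10)-(11). Hypothetical example.
    import numpy as np
    from scipy.integrate import solve_ivp

    R, C, T = 1.0, 1.0, 1.0                      # resistance, capacitance, period

    def dvdt(t, v):
        u = np.sin(2 * np.pi * t / T)            # T-periodic input u(t)
        return (-v / R - u) / C                  # C dv/dt + v/R + u = 0

    def Phi(v0, tau):
        """State-transition function: integrate from v(0) = v0 to t = tau."""
        sol = solve_ivp(dvdt, (0.0, tau), [v0], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    # Solve vT - Phi(vT, T) = 0 by Newton; dPhi/dv via finite differences.
    vT, eps = 0.0, 1e-6
    for _ in range(20):
        F = vT - Phi(vT, T)
        dF = 1.0 - (Phi(vT + eps, T) - Phi(vT, T)) / eps
        step = F / dF
        vT -= step
        if abs(step) < 1e-12:
            break

    print("steady-state v(0) =", vT)
    print("residual |vT - Phi(vT,T)| =", abs(vT - Phi(vT, T)))

Because this example is linear, the Newton iteration converges essentially in one step; for a nonlinear circuit the same loop simply takes a few more iterations. Note that Φ itself is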
usually evaluated numerically. This issue will reappear frequently in the material that follows.

… example is then an RC circuit described by the scalar differential equation …

As a second more general example, consider the linear differential equation system given in (4). If the C matrix is invertible, then the system can be recast as

    (d/dt) v(t) = −A v(t) − C⁻¹ u(t)    (15)

where A is an N × N matrix with A = C⁻¹G. The solution to (15) can be written explicitly using the matrix exponential [2]

    v(t) = e^{−At} v_o − ∫₀ᵗ e^{−A(t−τ)} C⁻¹ u(τ) dτ    (16)

where e^{−At} is the N × N matrix exponential. Combining (11) with (16) results in a linear system of equations for the vector v_T

    (I_N − e^{−AT}) v_T = − ∫₀ᵀ e^{−A(T−τ)} C⁻¹ u(τ) dτ.    (17)

A. Newton's Method

The steady-state methods described in what follows all generate systems of Q nonlinear algebraic equations in Q unknowns of the form

    F(x) ≜ [ f₁(x₁, …, x_Q) ; f₂(x₁, …, x_Q) ; ⋮ ; f_Q(x₁, …, x_Q) ] = 0    (18)

where each f_i(·) is a scalar nonlinear function of a Q-length vector variable.

The most commonly used class of methods for numerically solving (18) are variants of the iterative multidimensional Newton's method [18]. The basic Newton method can be derived by examining the first terms in a Taylor series expansion about a guess at the solution to (18)

    0 = F(x*) ≈ F(x) + J(x)(x* − x)    (19)

where x and x* are the guess and the exact solution to (18), respectively, and J(x) is the Q × Q Jacobian matrix whose elements are given by …
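The Jacobian entries are the partial derivatives J_ij(x) = ∂f_i(x)/∂x_j. As a hedged, self-contained illustration (the two-equation system below is invented purely for demonstration), the Newton iteration of (19) and the update equation that follows look like this in Python:

    # Sketch of the multidimensional Newton iteration of (19)/(21) on an
    # invented 2x2 nonlinear system; not code from the paper.
    import numpy as np

    def F(x):
        return np.array([x[0]**2 + x[1] - 3.0,
                         x[0] + x[1]**3 - 5.0])

    def J(x):
        # Q x Q Jacobian, J[i, j] = dF_i/dx_j
        return np.array([[2.0 * x[0], 1.0],
                         [1.0, 3.0 * x[1]**2]])

    x = np.array([1.0, 1.0])                     # initial guess x^0
    for k in range(50):
        dx = np.linalg.solve(J(x), -F(x))        # J(x^k)(x^{k+1} - x^k) = -F(x^k)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break

    print("solution:", x, "residual:", F(x))

Each pass of the loop solves exactly the linear update system discussed next.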
The expansion in (19) suggests that, given x^k, the estimate generated by the kth step of an iterative algorithm, it is possible to improve this estimate by solving the linear system

    J(x^k)(x^{k+1} − x^k) = −F(x^k).    (21)

… for m ∈ {1, …, M}. Here, h_m ≜ t_m − t_{m−1}, F_m(·) is a nonlinear function which maps an MN-length vector to an N-length vector and represents the mth backward-Euler timestep equation, and periodicity is invoked to replace v̂(t₀) with v̂(t_M) in the m = 1 equation. The system of equations is diagrammed in Fig. 5.

It is perhaps informative to rewrite (24) in matrix form, as in …
The Kronecker product notation makes it possible to summarize the backward-Euler M-step periodic discretization with an M × M differentiation matrix … and note also that the coefficients α_mj will be independent of m if all the timesteps are equal. To substitute a two-step backward-difference formula in (28), D_be is replaced by D_bd2, where …

Now suppose the capacitance approaches zero; then the matrix in (35) takes on a curious property. If the number of …
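The structure being summarized can be made concrete with a short, hedged Python sketch (our construction, consistent with the description of (27)-(28) but not taken from the paper): for uniform timesteps h, the backward-Euler periodic differentiation matrix is lower bidiagonal with one wrap-around corner entry enforcing periodicity, and the Kronecker product D ⊗ I_N expands it to act blockwise on an MN-length vector of stacked timepoint unknowns.

    # Sketch: backward-Euler M-step periodic differentiation matrix and its
    # Kronecker expansion, matching the structure described around (27)-(28).
    import numpy as np

    def dbe_periodic(M, h):
        """M x M backward-Euler differentiation matrix with periodic wrap-around."""
        D = np.zeros((M, M))
        for m in range(M):
            D[m, m] = 1.0 / h                    # coefficient of q(v(t_m))
            D[m, (m - 1) % M] = -1.0 / h         # coefficient of q(v(t_{m-1}));
                                                 # row 0 wraps to column M-1
        return D

    M, N, h = 4, 2, 0.25
    D = dbe_periodic(M, h)
    print(D)            # lower bidiagonal except for the upper-right corner entry
    DI = np.kron(D, np.eye(N))                   # (D kron I_N) acts on an MN-vector
    print(DI.shape)                              # (8, 8)

    # Applying (D kron I_N) to stacked samples approximates d/dt of a T-periodic
    # signal sampled at the M timepoints (one block row per timestep equation).
    t = h * np.arange(M)
    x = np.column_stack([np.cos(2*np.pi*t), np.sin(2*np.pi*t)]).reshape(-1)
    print((DI @ x).reshape(M, N))                # backward-difference derivative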
… interpolation condition implies that there exists a set of K timepoints, t₁, …, t_K, such that the K × K matrix

    Γ⁻¹ ≜
      [ φ₁(t₁)  ⋯  φ_K(t₁)
        ⋮       ⋱  ⋮
        φ₁(t_K) ⋯  φ_K(t_K) ]    (42)

is nonsingular and therefore the basis function coefficients can be uniquely determined from the sample points using …

To use basis functions to solve (9), consider expanding q(v(t)) in (9) as

    q(v(t)) ≈ Σ_{k=1}^{K} Q[k] φ_k(t)    (44)

where Q[k] is the N-length vector of weights for the kth basis function. Substituting (44) in (3) yields

    (d/dt) ( Σ_{k=1}^{K} Q[k] φ_k(t) ) + i(v(t)) + u(t) ≈ 0.    (45)

… (46) to be orthogonal to each of the basis functions. Such methods are referred to as Galerkin methods [7], [44], [46] and have played an important role in the development of periodic steady-state methods for circuits, though they are not the focus of this paper.

If the number of collocation points and the number of basis functions are equal, M = K, and the basis set satisfies the interpolation condition mentioned above with an M × M interpolation matrix Γ, then (47) can be recast using the Kronecker notation and paralleling (28) as …

    Γ̇⁻¹ ≜
      [ φ̇₁(t₁)  ⋯  φ̇_K(t₁)
        ⋮       ⋱  ⋮
        φ̇₁(t_K) ⋯  φ̇_K(t_K) ].    (49)

By analogy to (28), the product Γ̇⁻¹Γ in (48) can be denoted as a basis-function-associated differentiation matrix D_bda

    D_bda = Γ̇⁻¹ Γ    (50)

and (48) becomes identical in form to (28)

    F(v̂) = (D_bda ⊗ I_N) q(v̂) + i(v̂) + u = 0.    (51)
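To ground (42)-(51), here is a small hedged Python construction of D_bda = Γ̇⁻¹Γ; the five-function real Fourier basis, the equispaced collocation points, and the test signal are our own choices for illustration, not prescribed by the paper.

    # Sketch: basis-collocation differentiation matrix D_bda per (42), (49), (50);
    # a real Fourier basis is chosen for illustration.
    import numpy as np

    T = 1.0
    w = 2 * np.pi / T
    basis  = [lambda t: np.ones_like(t),
              lambda t: np.cos(w * t), lambda t: np.sin(w * t),
              lambda t: np.cos(2 * w * t), lambda t: np.sin(2 * w * t)]
    dbasis = [lambda t: np.zeros_like(t),
              lambda t: -w * np.sin(w * t), lambda t: w * np.cos(w * t),
              lambda t: -2 * w * np.sin(2 * w * t), lambda t: 2 * w * np.cos(2 * w * t)]

    K = len(basis)
    t = T * np.arange(K) / K                         # K collocation timepoints

    Gamma_inv  = np.column_stack([p(t) for p in basis])    # phi_k(t_i), as in (42)
    dGamma_inv = np.column_stack([p(t) for p in dbasis])   # dphi_k(t_i), as in (49)
    Gamma = np.linalg.inv(Gamma_inv)                 # samples -> coefficients
    D_bda = dGamma_inv @ Gamma                       # equation (50)

    # Check: D_bda maps samples of sin(wt) to samples of its exact derivative,
    # since sin(wt) lies in the span of the basis.
    x, dx_exact = np.sin(w * t), w * np.cos(w * t)
    print(np.max(np.abs(D_bda @ x - dx_exact)))      # ~1e-14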
1) Fourier Basis Example: If a sinusoid is the input to a system of differential equations generated by a linear time-invariant circuit, then the associated periodic steady-state solution will be a scaled and phase-shifted sinusoid of the same frequency. For a mildly nonlinear circuit with a sinusoidal input, the solution is often accurately represented by a sinusoid and a few of its harmonics. This observation suggests that a truncated Fourier series will be an efficient basis set for solving periodic steady-state problems of mildly nonlinear circuits.

To begin, any square-integrable T-periodic waveform x(t) can be represented as a Fourier series

    x(t) = Σ_{k=−∞}^{∞} X[k] e^{j2πf_k t}    (53)

where f_k = k/T and …

    x̂(t) = Σ_{k=−K}^{K} X̂[k] e^{j2πf_k t}    (55)

    (d/dt) x̂(t) = Σ_{k=−K}^{K} X̂[k] j2πf_k e^{j2πf_k t}.    (56)

… fast Fourier transform and its inverse. In addition, Γ̇_F⁻¹, representing the time derivative of the series representation, is given by

    Γ̇_F⁻¹ = Γ_F⁻¹ Ω    (57)

where Ω is the diagonal matrix given by

    Ω ≜ diag( j2πf_K, j2πf_{K−1}, …, j2πf_{−K} ).    (58)

The Fourier basis collocation method generates a system of equations of the form (51), where …
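The diagonal structure of Ω in (58) is precisely what makes the Fourier case cheap: differentiation is a forward FFT, a pointwise multiplication by j2πf_k, and an inverse FFT. A hedged numpy sketch (our own illustration; the test signal is arbitrary):

    # Sketch: spectral differentiation per (56)-(58) using the FFT; the
    # frequencies f_k = k/T follow the series definition in (53).
    import numpy as np

    T, M = 1.0, 32
    t = T * np.arange(M) / M
    x = np.exp(np.sin(2 * np.pi * t / T))        # smooth T-periodic test signal

    X = np.fft.fft(x)                            # samples -> Fourier coefficients
    fk = np.fft.fftfreq(M, d=T / M)              # f_k = k/T
    dx = np.fft.ifft(1j * 2 * np.pi * fk * X).real   # multiply by j*2*pi*f_k, invert

    dx_exact = (2 * np.pi / T) * np.cos(2 * np.pi * t / T) * x
    print(np.max(np.abs(dx - dx_exact)))         # near machine precision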
The connection between spectral differentiation and standard differencing schemes can be exploited when developing fast methods for computing solutions to (28) and (51), a point we will return to subsequently.

The error analysis of spectral-collocation methods can be found in [17] and [19]. Also, many implementations of harmonic balance in circuit simulators use spectral Galerkin methods rather than collocation schemes [7], and if a small number of harmonics are used, Galerkin spectral methods can have superior accuracy and often lead to nonlinear systems of equations that are more easily solved with Newton's method [47].

D. Shooting Methods

The numerical procedure for solving the state-transition-function-based periodic steady-state formulation in (11) is most easily derived for a specific discretization scheme and then generalized. Once again, consider applying the simple backward-Euler algorithm to (3). Given any v̂(t₀), the nonlinear equation

    [q(v̂(t₁)) − q(v̂(t₀))]/h₁ + i(v̂(t₁)) + u(t₁) = 0    (61)

can be solved, presumably using a multidimensional Newton method, for v̂(t₁). Then, v̂(t₁) can be used to solve

    [q(v̂(t₂)) − q(v̂(t₁))]/h₂ + i(v̂(t₂)) + u(t₂) = 0    (62)

for v̂(t₂). This procedure can be continued, effectively integrating the differential equation one timestep at a time until v̂(t_M) has been computed. And since the nonlinear equations are solved at each timestep, v̂(t_m) is an implicitly defined algebraic function of v̂(t₀). This implicitly defined function is a numerical approximation to the state-transition function Φ(·) described in the previous section. That is,

    v̂(t_m) = Φ̂(v̂(t₀), t_m) ≈ Φ(v̂(t₀), t_m).    (63)

The discretized version of the state-transition-function-based periodic steady-state formulation is then

    F_sh(v̂(t_M)) ≜ v̂(t_M) − Φ̂(v̂(t_M), t_M) = 0.    (64)

Using (64) to compute a steady-state solution is often referred to as a shooting method, in which one guesses a periodic steady state and then shoots forward one period with the hope of arriving close to the guessed initial state. Then, the difference between the initial and final states is used to correct the initial state, and the method shoots forward another period. As is commonly noted in the numerical analysis literature, this shooting procedure will be disastrously ineffective if the first guess at a periodic steady state excites rapidly growing unstable behavior in the nonlinear system [18], but this is rarely an issue for circuit simulation. Circuits with such unstable "modes" are unlikely to be useful in practice, and most circuit designers using periodic steady-state analysis have already verified that their designs are quite stable.

The state correction needed for the shooting method can be performed with Newton's method applied to (64), in which case the correction equation becomes

    [I_N − J_Φ̂(v̂^k(t_M), T)] [v̂^{k+1}(t_M) − v̂^k(t_M)] = −F_sh(v̂^k(t_M))    (65)

where k is the Newton iteration index, I_N is the N × N identity matrix, and

    J_Φ̂(v, T) ≜ (d/dv) Φ̂(v, T)    (66)

is referred to as the discretized sensitivity matrix.

To complete the description of the shooting-Newton method, it is necessary to present the procedure for computing Φ̂(v, T) and J_Φ̂(v, T). As mentioned above, computing the approximate state transition function is equivalent to solving the backward-Euler equations as in (24) one timestep at a time. Solving the backward-Euler equations is usually accomplished using an inner Newton iteration, as in

    [C_mkl/h_m + G_mkl] [v̂^{k,(l+1)}(t_m) − v̂^{k,l}(t_m)]
        = −(1/h_m)[q(v̂^{k,l}(t_m)) − q(v̂^k(t_{m−1}))] − i(v̂^{k,l}(t_m)) − u(t_m)    (67)

where m is the timestep index, k is the shooting-Newton iteration index, l is the inner Newton iteration index, C_mkl = dq(v̂^{k,l}(t_m))/dv, and G_mkl = di(v̂^{k,l}(t_m))/dv. Sometimes, there are just too many indices.

To see how to compute J_Φ̂(v, T) as a by-product of the Newton method in (67), let l = * denote the inner Newton iteration index which achieves sufficient convergence and let v̂^{k,*}(t_m) denote the associated inner Newton converged solution. Using this notation

    (1/h_m)[q(v̂^{k,*}(t_m)) − q(v̂^{k,*}(t_{m−1}))] + i(v̂^{k,*}(t_m)) + u(t_m) = ε_m    (68)

where the left-hand side of (68) is almost zero, so that ε_m is bounded by the inner Newton convergence tolerance. Implicitly differentiating (68) with respect to v, and assuming that ε_m is independent of v, results in

    [C_mk*/h_m + G_mk*] (d/dv) v̂^{k,*}(t_m) = (C_{(m−1)k*}/h_m) (d/dv) v̂^{k,*}(t_{m−1})    (69)

where it is usually the case that the matrices C_mk*/h_m and G_mk* are available as a by-product of the inner Newton iteration. Recursively applying (69), with the initial value v̂^{k,*}(t₀) = v, yields a product form for the Jacobian

    J_Φ̂(v, t_M) = ∏_{m=1}^{M} [C_mk*/h_m + G_mk*]⁻¹ (C_{(m−1)k*}/h_m)    (70)

where the notation ∏_{m=1}^{M} indicates the M-term product rather than a sum [15].

1) Linear Example: If a fixed-timestep backward-Euler discretization scheme is applied to (4), then

    (C/h)[v̂(t_m) − v̂(t_{m−1})] + G v̂(t_m) + u(t_m) = 0    (71)

for m ∈ {1, …, M}, where periodicity implies that v̂(t_M) = v̂(t₀). Solving for v̂(t_M) yields a linear system of equations

    [I_N − J_Φ] v̂(t_M) = b    (72)

… It is interesting to compare (72) and (74) to (16); note how the fixed-timestep backward-Euler algorithm is approximating the matrix exponential in (73) and the convolution integral in (74).

2) Comparing to Finite-Difference Methods: If Newton's method is used for both the shooting method, as in (65), and for the finite-difference method, in which case the Jacobian is (40), there appears to be an advantage for the shooting-Newton method. The shooting-Newton method is being used to solve a system of N nonlinear equations, whereas the finite-difference-Newton method is being used to solve an NM system of nonlinear equations. This advantage is not as significant as it seems, primarily because computing the sensitivity matrix according to (70) is more expensive than computing the finite-difference Jacobian. In this section, we examine the backward-Euler discretized equations to show that solving a system with the shooting method Jacobian, as in (65), is nearly equivalent to solving a preconditioned system involving the finite-difference method Jacobian in (40).

To start, let L be the NM × NM block lower bidiagonal matrix given by

    L ≜
      [ C₁/h₁ + G₁
        −C₁/h₂       C₂/h₂ + G₂
                      ⋱             ⋱
                      −C_{M−1}/h_M  C_M/h_M + G_M ]    (75)

and define B as the NM × NM matrix with a single nonzero block

    B ≜
      [ 0  ⋯  0  C_M/h₁
        0  ⋯  0  0
        ⋮       ⋮  ⋮
        0  ⋯  0  0 ]    (76)

… though L⁻¹ would never be explicitly computed. Here, I_NM is the NM × NM identity matrix.

Fig. 7. Diagram of solving the backward-Euler discretized finite-difference-Newton iteration equation (top picture) and solving the shooting-Newton iteration equation (bottom picture). Note that for the shooting method the right-hand-side contribution at intermediate timesteps is zero, due to the inner Newton iteration.
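The whole shooting-Newton procedure of (61)-(70) fits in a few dozen lines for a scalar example. The hedged Python sketch below (our own construction; the nonlinear conductance v/R + a·v³ and all values are invented) runs the inner Newton iteration of (67) at each timestep, accumulates the sensitivity by the recursion (69)-(70), and applies the outer correction (65).

    # Sketch of the shooting-Newton method of (61)-(70) for a scalar nonlinear
    # RC circuit: C dv/dt + v/R + a*v^3 + u(t) = 0. Illustrative only.
    import numpy as np

    Cq, R, a, T, M = 1.0, 1.0, 0.5, 1.0, 200
    h = T / M
    u = lambda t: np.sin(2 * np.pi * t / T)

    def shoot(v0):
        """Backward-Euler integration over one period, returning Phi_hat(v0, T)
        and the sensitivity J = dPhi_hat/dv0 from the recursion (69)-(70)."""
        v, J = v0, 1.0
        for m in range(1, M + 1):
            vm = v                               # inner Newton for timestep m, (67)
            for _ in range(50):
                f = Cq * (vm - v) / h + vm / R + a * vm**3 + u(m * h)
                df = Cq / h + 1.0 / R + 3.0 * a * vm**2      # C/h + G_m
                step = f / df
                vm -= step
                if abs(step) < 1e-14:
                    break
            J = (Cq / h) / df * J                # (C/h + G_m)^{-1} (C/h), per (69)
            v = vm
        return v, J

    v0 = 0.0
    for k in range(20):                          # outer shooting-Newton loop, (65)
        phi, J = shoot(v0)
        step = (v0 - phi) / (1.0 - J)            # (I - J_Phi) dv = -F_sh
        v0 -= step
        if abs(step) < 1e-12:
            break

    print("steady-state v(0) =", v0,
          " |v0 - Phi(v0,T)| =", abs(v0 - shoot(v0)[0]))

The sensitivity factor reuses the converged inner-Newton derivative df, exactly the "available as a by-product" observation made after (69).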
Examining (77) reveals an important feature: L⁻¹B is an NM × NM matrix whose only nonzero entries are in the last N columns. Specifically,

    (I_NM − L⁻¹B) =
      [ I_N  0    0   ⋯  0    −P₁
        0    I_N  0   ⋯  0    −P₂
        ⋮    ⋮    ⋱       ⋮    ⋮
        0    ⋯        I_N 0    −P_{M−2}
        0    ⋯            I_N  −P_{M−1}
        0    ⋯        0   0    I_N − P_M ]    (78)

where the N × N matrix P_M occupies rows N(M−1)+1 through NM of the last N columns of L⁻¹B. This bordered-block diagonal form implies that ṽ^{k+1} − ṽ^k in (77) can be computed in three steps. The first step is to compute P_M; the second step is to use the computed P_M to determine the last N entries in ṽ^{k+1} − ṽ^k. The last step in solving (77) is to compute the rest of ṽ^{k+1} − ṽ^k by backsolving with L.

The close relation between solving (77) and (65) can now be easily established. If L and B are formed using C_mk* and G_mk* as defined in (69), then by explicitly computing L⁻¹B it can be shown that the shooting method Jacobian J_Φ(v̂^k(t₀), t_M) is equal to P_M. The importance of this observation is that solving the shooting-Newton update equation is nearly computationally equivalent to solving a preconditioned finite-difference-Newton update (77). The comparison can be made precise: if q(v) and i(v) are linear, so that the C and G matrices are independent of v, then the v̂^{k+1}(t_M) − v̂^k(t_M) produced by the kth iteration of Newton's method applied to the finite-difference formulation will be identical to the v̂^{k+1}(t_M) − v̂^k(t_M) produced by solving (65).

An alternative interpretation of the connection between shooting-Newton and finite-difference-Newton methods is to view the shooting-Newton method as a two-level Newton method [10] for solving (24). In the shooting-Newton method, if v̂(t_m) is computed by using an inner Newton method to solve the nonlinear equation F_m at each timestep, starting with v̂(t₀) = v̂(t_M), or equivalently if v̂(t_m) is computed by evaluating Φ̂(v̂(t_M), t_m), then F_i(v̂) in (24) will be nearly zero for i ∈ {1, …, M−1}, regardless of the choice of v̂(t_M). Of course, Φ̂(v̂(t_M), t_M) will not necessarily be equal to v̂(t_M); therefore, F_M(v̂) will not be zero, unless v̂(t_M) = v̂(t₀) is the right initial condition to produce the periodic steady state. In the shooting-Newton method, an outer Newton method is used to correct v̂(t_M). The difference between the two methods can then be characterized by differences in inputs to the linearized systems, as diagrammed in Fig. 7.

3) More General Shooting Methods: Many finite-difference methods can be converted to shooting methods, but only if the underlying time discretization scheme treats the periodicity constraint by "wrapping around" a single final timepoint. For discretization schemes which satisfy this constraint, the M × M periodic differentiation matrix, D, is lower triangular except for a single entry in the upper right-hand corner. As an example, the differentiation matrix D_be in (27) is lower bidiagonal except for a single entry in the upper right-hand corner. To be more precise, if

    D_{i,j} = 0,  j > i,  i ≠ 1,  j ≠ M    (79)

consider separating D into its lower triangular part, D_L, and a single entry D_{1,M}. The shooting method can then …

    (D_L ⊗ I_N) q(v̂) + i(v̂) + u + (D_{1,M} ẽ₁) ⊗ q(v_T) = 0    (82)

where ẽ₁ is the M-length vector of zeros except for unity in the first entry. Perhaps the last Kronecker term generating an NM-length vector in (82) suggests that the authors have become carried away and overused the Kronecker product, as

    (D_{1,M} ẽ₁) ⊗ q(v_T) = [ D_{1,M} q(v_T) ; 0 ; ⋮ ; 0 ].    (83)

…

    J_shoot = I_N − ∏_{m=1}^{M} [C_m/h_m + G_m]⁻¹ (C_m/h_m)    (85)

and is the difference between the identity matrix and a product of M N × N matrices, each of which must be computed as the product of a scaled capacitance matrix and the inverse of a weighted sum of a capacitance and a conductance matrix. The NM × NM finite-difference or basis-collocation Jacobians, denoted generally as J_fdbc, are also implicitly defined. In particular

    J_fdbc = (D ⊗ I_N) C + G.    (86)
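In practice (86) is applied, never formed: the product J_fdbc·p can be computed one N-length block at a time from D and the per-timestep C_m, G_m blocks. A hedged sketch of that matrix-implicit product (dense blocks and random data, invented purely to check the structure):

    # Sketch: matrix-implicit product y = J_fdbc p = ((D kron I_N) C + G) p,
    # per (86), computed blockwise without forming the NM x NM matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 6, 3
    D = rng.standard_normal((M, M))              # differentiation matrix (dense)
    Cb = [rng.standard_normal((N, N)) for _ in range(M)]   # C_m blocks
    Gb = [rng.standard_normal((N, N)) for _ in range(M)]   # G_m blocks

    def jfdbc_matvec(p):
        p = p.reshape(M, N)
        Cp = np.stack([Cb[m] @ p[m] for m in range(M)])    # block-diagonal C p
        y = D @ Cp                                         # (D kron I_N) mixes blocks
        y += np.stack([Gb[m] @ p[m] for m in range(M)])    # + G p
        return y.reshape(-1)

    # Consistency check against the explicitly assembled Jacobian.
    Cfull = np.zeros((M * N, M * N))
    Gfull = np.zeros((M * N, M * N))
    for m in range(M):
        Cfull[m*N:(m+1)*N, m*N:(m+1)*N] = Cb[m]
        Gfull[m*N:(m+1)*N, m*N:(m+1)*N] = Gb[m]
    J = np.kron(D, np.eye(N)) @ Cfull + Gfull
    p = rng.standard_normal(M * N)
    print(np.max(np.abs(jfdbc_matvec(p) - J @ p)))         # ~1e-14

The blockwise form costs order M²N + MN² operations per product for dense D (less when D is sparse), versus the M²N² of the assembled matrix.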
When combined with good preconditioners, to be discussed subsequently, Krylov subspace methods reliably solve linear systems of equations but do not require an explicit system matrix. Instead, Krylov subspace methods only require matrix-vector products, and such products can be computed efficiently using the implicit forms of the Jacobians given above.

Claiming that Krylov subspace methods do not require explicit system matrices is somewhat misleading, as these methods converge quite slowly without a good preconditioner, and preconditioners often require that at least some of the system matrix be explicitly represented. In the following section, we describe one of the Krylov subspace methods, GMRES, and the idea of preconditioning. In the subsections that follow, we address the issue of preconditioning, first demonstrating why Krylov subspace methods converge rapidly for shooting methods even without preconditioning, then describing lower-triangular and averaging-based preconditioning for basis-collocation and finite-difference methods, drawing connections to shooting methods when informative.

A. Krylov Subspace Methods

Krylov subspace methods are the most commonly used iterative technique for solving Newton iteration equations for periodic steady-state solvers. The two main reasons for the popularity of this class of iterative methods are that only matrix-vector products are required, avoiding explicitly forming the system matrix, and that convergence is rapid when preconditioned effectively.

As an example of a Krylov subspace method, consider the generalized minimum residual algorithm, GMRES [9]. A simplified version of GMRES applied to solving a generic problem is given as follows.

GMRES Algorithm for Solving Ax = b
    Guess at a solution, x⁰.
    Initialize the search direction p⁰ = b − Ax⁰.
    Set k = 1.
    do {
        Compute the new search direction, p^k = A p^{k−1}.
        Orthogonalize, p^k = p^k − Σ_{j=0}^{k−1} β_{k,j} p^j.
        Choose α_k in x^k = x^{k−1} + α_k p^k
            to minimize ‖r^k‖ = ‖b − Ax^k‖.
        If ‖r^k‖ < tolerance_gmres, return x^k as the solution.
        else Set k = k + 1.
    }

Krylov subspace methods converge rapidly when applied to matrices which are not too nonnormal and whose eigenvalues are contained in a small number of tight clusters [44]. Therefore, Krylov subspace methods converge rapidly when applied to matrices which are small perturbations from the identity matrix but can, as an example, converge remarkably slowly when applied to a diagonal matrix whose diagonal entries are widely dispersed. In many cases, convergence can be accelerated by replacing the original problem with a preconditioned problem

    P A x = P b    (87)

where A and b are the original system's matrix and right-hand side, and P is the preconditioner. Obviously, the preconditioner that best accelerates the Krylov subspace method is P = A⁻¹, but were such a preconditioner available, no iterative method would be needed.

B. Fast Shooting Methods

Applying GMRES to solving the backward-Euler discretized shooting-Newton iteration equation is straightforward, as multiplying by the shooting method Jacobian in (85) can be accomplished using the simple M-step algorithm as follows.

Computing p^k = J_shoot p^{k−1}
    Initialize p_temp = p^{k−1}
    For m = 1 to M {
        Solve [(C_m/h_m) + G_m] p^k = (C_m/h_m) p_temp
        Set p_temp = p^k
    }
    Finalize p^k = p^{k−1} − p^k

The N × N C_m and G_m matrices are typically quite sparse, so each of the M matrix solutions required in the above algorithm requires roughly order N operations, where order(·) is used informally here to imply proportional growth. Given the order N cost of the matrix solution in this case, computing the entire matrix-vector product requires order MN operations.

Note that the above algorithm can also be used N times to compute an explicit representation of J_shoot, generating the explicit matrix one column at a time. To compute the ith column, set p^{k−1} = ẽ_i, where ẽ_i can be thought of as the ith unit vector or the ith column of the N × N identity matrix. Using this method, the cost of computing an explicit representation of the shooting method Jacobian requires order MN² operations, which roughly equals the cost of performing N GMRES iterations.

When applied to solving the shooting-Newton iteration equation, GMRES and other Krylov subspace methods converge surprisingly rapidly without preconditioning and typically require many fewer than N iterations to achieve sufficient accuracy. This observation, combined with the above analysis, implies that using a matrix-implicit GMRES algorithm to compute the shooting-Newton update will be far faster than computing the explicit shooting-Newton Jacobian.

In order to develop some insight as to why GMRES converges so rapidly when applied to matrices like the shooting method Jacobian in (85), we consider the
linear problem (4) and assume the C matrix is invertible so that A = C⁻¹G is defined as in (15). For this linear case and a steady-state period T, the shooting method Jacobian is approximately

    J_shoot ≈ I − e^{−AT}.    (88)

Since eigenvalues are continuous functions of matrix elements, any eigenproperties of I − e^{−AT} will roughly hold for J_shoot, provided the discretization is accurate. In particular, using the properties of the matrix exponential implies

    eig(J_shoot) ≈ eig(I − e^{−AT}) = { 1 − e^{−λ_i T} },  i ∈ {1, …, N}    (89)

where λ_i is the ith eigenvalue of A or, in circuit terms, the inverse of the ith time constant. Therefore, if all but a few of the circuit time constants are smaller than the steady-state period T, then e^{−λ_i T} ≪ 1 and the eigenvalues of J_shoot will be tightly clustered near one. That there might be a few "outliers" has a limited impact on Krylov subspace method convergence.

As a demonstration example, consider applying GMRES to solving the shooting method matrix associated with a 500-node RC line circuit as in Fig. 3, with 1-F capacitors and 1-Ω resistors. The time constants for the circuit vary from tenths of a second to nearly 30 000 s and are plotted in Fig. 8. The error versus iteration for the GMRES algorithm is plotted in Fig. 9 for solving the periodic steady-state equation for the RC line with three different periods. The fastest converging case is when the period is 10 000 s, as only a few time constants are larger than the period. The convergence of GMRES is slower when the period is 1000 s, as more of the time constants are larger than the period. And as one might have predicted, the GMRES algorithm requires dozens of iterations when the period is 100 s, as there are then dozens of time constants much longer than the period.

1) Results: In this section, we present experimental results for the performance of three methods for solving the shooting-Newton update equations: direct factorization or Gaussian elimination, explicit GMRES, and matrix-implicit GMRES.

Table 1 contains a comparison of the performance of the three equation solvers in an implementation of a shooting-Newton method in a prototype circuit simulator. The test examples include: xtal, a crystal filter; mixer, a small GaAs mixer; dbmixer, a double-balanced mixer; lmixer, a large bipolar mixer; cheby, an active filter; and scf, a relatively large switched-capacitor filter. The second column in Table 1 lists the number of equations in each circuit. The third column represents the number of one-period transient analyses that were necessary to achieve steady state using the shooting-Newton method. The fourth, fifth, and sixth columns represent, respectively, the time in seconds to achieve steady state using Gaussian elimination, explicit GMRES, and the matrix-implicit form. All the results were obtained on an HP712/80 …
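A scaled-down, hedged Python sketch of the RC-line experiment described above (50 nodes instead of 500, our own construction; the GMRES keyword rtol is named tol in SciPy versions before 1.12) applies J_shoot matrix-implicitly through the M-step backward-Euler product and hands it to GMRES as a LinearOperator:

    # Sketch: matrix-implicit GMRES for J_shoot x = b on an RC-line circuit,
    # J_shoot applied via the M-step algorithm above; illustrative only.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    N, M, T = 50, 100, 1000.0                    # nodes, timesteps, period
    h = T / M
    G = sp.diags([-np.ones(N-1), 2*np.ones(N), -np.ones(N-1)],
                 [-1, 0, 1], format="csc")       # 1-Ohm resistor line, grounded ends
    C = sp.identity(N, format="csc")             # 1-F capacitors

    lu = spla.splu((C / h + G).tocsc())          # factor (C/h + G) once (fixed h)

    def jshoot_matvec(p):
        # x_m = (C/h + G)^{-1} (C/h) x_{m-1}, m = 1..M;  J_shoot p = p - x_M
        x = p.copy()
        for _ in range(M):
            x = lu.solve((C / h) @ x)
        return p - x

    A = spla.LinearOperator((N, N), matvec=jshoot_matvec)
    b = np.ones(N)
    x, info = spla.gmres(A, b, rtol=1e-10)       # converges in a few iterations
    print("converged:", info == 0,
          "residual:", np.linalg.norm(b - A.matvec(x)))

Because T is much larger than most of the line's time constants, the operator is a small perturbation of the identity and GMRES needs only a handful of iterations, consistent with (88)-(89).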
Fig. 9. Error versus iteration for GMRES: × for the 100-s period, o for the 1000-s period, and + for the 10 000-s period.
Table 1. Comparison of Different Shooting Method Schemes.

… is a block lower triangular matrix whose inverse is easily applied. Using the inverse of (95) as a preconditioner yields

    J_precond = [(D_L ⊗ I_N)C + G]⁻¹ [(D ⊗ I_N)C + G]
              = I_NM + [(D_L ⊗ I_N)C + G]⁻¹ (D_U ⊗ I_N)C.    (96)
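Since D_L is lower triangular, applying the inverse of the block lower-triangular factor reduces to block forward substitution. A hedged Python sketch (dense blocks and random data, invented to exhibit the structure only):

    # Sketch: applying L^{-1}, L = (D_L kron I_N) C + G with D_L lower
    # triangular, by block forward substitution.
    import numpy as np

    rng = np.random.default_rng(2)
    M, N = 5, 3
    D_L = np.tril(rng.standard_normal((M, M)))   # lower-triangular part of D
    Cb = [np.eye(N) + 0.1*rng.standard_normal((N, N)) for _ in range(M)]
    Gb = [np.eye(N) + 0.1*rng.standard_normal((N, N)) for _ in range(M)]

    def lower_solve(b):
        """Solve ((D_L kron I_N) C + G) x = b one N-length block at a time."""
        b = b.reshape(M, N)
        x = np.zeros((M, N))
        for i in range(M):
            rhs = b[i] - sum(D_L[i, j] * (Cb[j] @ x[j]) for j in range(i))
            x[i] = np.linalg.solve(D_L[i, i] * Cb[i] + Gb[i], rhs)
        return x.reshape(-1)

    # Consistency check against the explicitly assembled block matrix.
    Cfull = np.zeros((M*N, M*N)); Gfull = np.zeros((M*N, M*N))
    for m in range(M):
        Cfull[m*N:(m+1)*N, m*N:(m+1)*N] = Cb[m]
        Gfull[m*N:(m+1)*N, m*N:(m+1)*N] = Gb[m]
    Lmat = np.kron(D_L, np.eye(N)) @ Cfull + Gfull
    b = rng.standard_normal(M * N)
    print(np.max(np.abs(Lmat @ lower_solve(b) - b)))       # ~1e-13

Each block solve involves only the sparse N × N matrices, so the whole preconditioner application costs roughly the same as one pass of the fast shooting product.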
… preconditioned system will have a bordered block diagonal form

    J_precond =
      [ I_N  0    0   ⋯  0    −P₁
        0    I_N  0   ⋯  0    −P₂
        ⋮    ⋮    ⋱       ⋮    ⋮
        0    ⋯        I_N 0    −P_{M−2}
        0    ⋯            I_N  −P_{M−1}
        0    ⋯        0   0    I_N − P_M ]    (97)

…

    [(D_L ⊗ I_N)C + G]⁻¹ [(D_U ⊗ I_N)C + G]    (98)

…

    (D ⊗ C + I_M ⊗ G) v̂ = u    (99)

where D is the differentiation matrix for the selected method. Explicitly forming and factoring (D ⊗ C + I_M ⊗ G) can be quite computationally expensive, even though C and G are typically extremely sparse. The problem is that the differentiation matrix D can be dense, and the matrix will fill in substantially during sparse factorization.

A much faster algorithm for solving (99) which avoids explicitly forming (D ⊗ C + I_M ⊗ G) can be derived by making the modest assumption that D is diagonalizable, as in

    D = S Λ S⁻¹    (100)

where Λ is the M × M diagonal matrix of eigenvalues and S is the M × M matrix of eigenvectors [41]. Using the eigendecomposition of D in (99) leads to

    [ (S Λ S⁻¹) ⊗ C + (S I_M S⁻¹) ⊗ G ] v̂ = u.    (101)

Using the Kronecker product property [29]

    (AB) ⊗ (CD) = (A ⊗ C)(B ⊗ D)    (102)

(101) becomes

    (S ⊗ I_N)(Λ ⊗ C + I_M ⊗ G)(S⁻¹ ⊗ I_N) v̂ = u.    (103)

…

    (Λ ⊗ C + I_M ⊗ G)⁻¹ =
      [ (λ₁C + G)⁻¹
                     (λ₂C + G)⁻¹
                                   ⋱
                                      (λ_M C + G)⁻¹ ].    (107)

To compute v̂ using (106) requires a sequence of three matrix-vector multiplications. Multiplying by (S⁻¹ ⊗ I_N) and (S ⊗ I_N) each require NM² operations. As (107) makes clear, multiplying by (Λ ⊗ C + I_M ⊗ G)⁻¹ is M times the cost of applying (λ₁C + G)⁻¹, or roughly order MN operations, as C and G are typically very sparse. The resulting solution algorithm therefore requires

    order(M³) + 2NM² + order(MN)    (108)

operations. The first term in (108) is the cost of eigendecomposing D, the second term is the cost of multiplying by (S⁻¹ ⊗ I_N) and (S ⊗ I_N), and the third term is associated with the cost of factoring and solving with the sparse matrices (λ_m C + G). The constant factors associated with the third term are large enough that it dominates unless the number of timepoints, M, is quite large.

It should be noted that if D is associated with a periodized fixed-timestep multistep method, or with a basis-collocation method using Fourier series, then D will be circulant. For circulant matrices, S and S⁻¹ will be equivalent to the discrete Fourier transform matrix and its inverse [28]. In these special cases, multiplication by S and S⁻¹ can be performed in order M log M operations using the forward and inverse fast Fourier transforms, reducing the computational cost of (106) to

    order(M log M) + order(NM log M) + order(MN)    (109)

which is a substantial improvement only when the number of discretization timepoints or basis functions is quite large.

4) Extension to the Nonlinear Case: For finite-difference and basis-collocation methods applied to nonlinear problems, the Jacobian has the form

    J_fdbc(v) = (D ⊗ I_N) C + G    (110)

where C and G are as defined in (38) and (39). In order to precondition J_fdbc, consider computing averages of the individual C_m and G_m matrices in the M diagonal blocks in C and G. Using these C_avg and G_avg matrices in (110) results in a decomposition

    J_fdbc(v) = (D ⊗ C_avg + I_M ⊗ G_avg) + ((D ⊗ I_N) ΔC + ΔG)    (111)

where ΔC = C − (I_M ⊗ C_avg) and ΔG = G − (I_M ⊗ G_avg). To see how the terms involving C_avg and C in (111) were derived from (110), consider a few intermediate steps. First, note that by the definition of ΔC

    (D ⊗ I_N) C = (D ⊗ I_N) [ (I_M ⊗ C_avg) + ΔC ]    (112)

which can be reorganized as

    (D ⊗ I_N) C = (D ⊗ I_N)(I_M ⊗ C_avg) + (D ⊗ I_N) ΔC.    (113)

The first term on the right-hand side of (113) can be simplified using the reverse of the Kronecker property in (102). That is,

    (D ⊗ I_N)(I_M ⊗ C_avg) = (D I_M ⊗ I_N C_avg) = (D ⊗ C_avg)    (114)

where the last equality follows trivially from the fact that I_M and I_N are identity matrices. The result needed to derive (111) from (110) is then

    (D ⊗ I_N) C = (D ⊗ C_avg) + (D ⊗ I_N) ΔC.    (115)

Though (111) is somewhat cumbersome to derive, its form suggests preconditioning using (D ⊗ C_avg + I_M ⊗ G_avg)⁻¹, which, as shown above, is reasonably inexpensive to apply. The preconditioned Jacobian is then

    (D ⊗ C_avg + I_M ⊗ G_avg)⁻¹ J_fdbc(v) = I + Δ_DCG    (116)

where

    Δ_DCG = (D ⊗ C_avg + I_M ⊗ G_avg)⁻¹ ((D ⊗ I_N) ΔC + ΔG).    (117)

If the circuit is only mildly nonlinear, then Δ_DCG will be small, and the preconditioned Jacobian in (116) will be close to the identity matrix and have tightly clustered eigenvalues. As mentioned above, for such a case a Krylov-subspace method will converge rapidly.

5) Example Results: In this section, we present some limited experimental results to both demonstrate the reduction in computation time that can be achieved using matrix-implicit iterative methods and to show the effectiveness of the averaging preconditioner.

In Table 2, we compare the megaflops required for different methods to solve the linear system associated with a Fourier basis-collocation scheme. We compare Gaussian elimination (GE), preconditioned explicit GMRES (GMRES), and matrix-implicit GMRES (MI). Megaflops rather than CPU time are reported because our implementations are not uniformly optimized. The example used to generate the table is a distortion analysis of a 31-node CMOS operational transconductance amplifier. Note that for a 32-harmonic simulation, with 2232 unknowns, the matrix-implicit GMRES is more than ten …
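The averaging preconditioner rests on the fast application of (D ⊗ C_avg + I_M ⊗ G_avg)⁻¹ via the eigendecomposition route of (100)-(107). A hedged Python sketch (random mildly varying blocks, invented for illustration; a random D is used in place of a real differentiation matrix):

    # Sketch: applying (D kron Cavg + I_M kron Gavg)^{-1} through the
    # eigendecomposition D = S Lambda S^{-1}, as in (100)-(107).
    import numpy as np

    rng = np.random.default_rng(1)
    M, N = 8, 4
    D = rng.standard_normal((M, M))              # stand-in differentiation matrix
    Cb = [np.eye(N) + 0.1*rng.standard_normal((N, N)) for _ in range(M)]
    Gb = [np.eye(N) + 0.1*rng.standard_normal((N, N)) for _ in range(M)]
    Cavg = sum(Cb) / M                           # block averages, as in (111)
    Gavg = sum(Gb) / M

    lam, S = np.linalg.eig(D)                    # D = S diag(lam) S^{-1}, (100)
    Sinv = np.linalg.inv(S)

    def precond_solve(u):
        """Solve (D kron Cavg + I_M kron Gavg) v = u with M small solves."""
        w = Sinv @ u.reshape(M, N).astype(complex)        # (S^{-1} kron I_N) u
        y = np.stack([np.linalg.solve(lam[m]*Cavg + Gavg, w[m])
                      for m in range(M)])                 # per-mode solves, (107)
        return (S @ y).reshape(-1).real                   # (S kron I_N) y

    # Consistency check against the explicitly formed preconditioner matrix.
    P = np.kron(D, Cavg) + np.kron(np.eye(M), Gavg)
    u = rng.standard_normal(M * N)
    print(np.max(np.abs(P @ precond_solve(u) - u)))       # ~1e-12

Wrapping precond_solve around the matrix-implicit J_fdbc product from the earlier sketch gives exactly the preconditioned operator I + Δ_DCG of (116), which is nearly the identity for mildly nonlinear circuits.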
Table 2. Megaflops Required by Different Techniques for Single-Frequency Distortion Analysis of a Nonlinear 31-Node Operational Transconductance Amplifier.

Table 3. Results From Single-Frequency Distortion Analysis of a Nonlinear 31-Node Operational Transconductance Amplifier. Note the Increase in Number of Iterations With Number of Harmonics Without Preconditioning.
REFERENCES

[1] L. Nagel and R. Rohrer, "Computer analysis of nonlinear circuits, excluding radiation (CANCER)," IEEE J. Solid-State Circuits, Aug. 1971.
[2] L. Zadeh and C. Desoer, Linear System Theory. New York: McGraw-Hill, 1963.
[3] L. Chua, C. Desoer, and E. Kuh, Linear and Nonlinear Circuits. New York: McGraw-Hill, 1987.
[4] A. Ushida, L. Chua, and T. Sugawara, "A substitution algorithm for solving nonlinear circuits with multifrequency components," Int. J. Circuit Theory Appl., vol. 15, 1987.
[5] P. Heikkilä, "Object-oriented approach to numerical circuit analysis," Ph.D. dissertation, Helsinki Univ. Technology, Helsinki, Jan. 1992.
[6] K. Kundert, J. White, and A. Sangiovanni-Vincentelli, Steady-State Methods for Simulating Analog and Microwave Circuits. Boston, MA: Kluwer, 1990.
[7] R. Gilmore and M. Steer, "Nonlinear circuit analysis using the method of harmonic balance: A review of the art. Part I: Introductory concepts," Int. J. Microwave Millimeter Wave Computer Aided Engineering, vol. 1, no. 1, 1991.
[8] M. Okumura, H. Tanimoto, T. Itakura, and T. Sugawara, "Numerical noise analysis for nonlinear circuits with a periodic large signal excitation including cyclostationary noise sources," IEEE Trans. Circuits Syst. I: Fundam. Theory Appl., vol. 40, no. 9, pp. 581-590, Sep. 1993.
[9] Y. Saad and M. H. Schultz, "GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems," SIAM J. Scientific Statistical Computing, vol. 7, pp. 856-869, Jul. 1986.
[10] N. Rabbat, A. Sangiovanni-Vincentelli, and H. Hsieh, "A multilevel Newton algorithm with macromodeling and latency for the analysis of large-scale non-linear circuits in the time domain," IEEE Trans. Circuits Syst., pp. 733-741, 1979.
[11] R. Melville, P. Feldmann, and J. Roychowdhury, "Efficient multi-tone distortion analysis of analog integrated circuits," in Proc. IEEE Custom Integrated Circuits Conf., May 1995.
[12] L. T. Watson, Appl. Math. Comput., vol. 5, pp. 297-311, 1979.
[13] R. Telichevesky, K. S. Kundert, and J. K. White, "Efficient steady-state analysis based on matrix-free Krylov-subspace methods," in Proc. Design Automation Conf., Santa Clara, CA, Jun. 1995.
[14] H. Keller, Numerical Solution of Two Point Boundary-Value Problems. Philadelphia, PA: SIAM, 1976.
[15] T. J. Aprille and T. N. Trick, "Steady-state analysis of nonlinear circuits with periodic inputs," Proc. IEEE, vol. 60, no. 1, pp. 108-114, Jan. 1972.
[16] J. P. Boyd, Chebyshev and Fourier Spectral Methods. New York: Springer-Verlag, 1989.
[17] D. Gottlieb and S. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications. Philadelphia, PA: SIAM, 1977.
[18] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis. New York: Springer.
[19] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid Mechanics. New York: Springer-Verlag, 1987.
[20] B. Troyanovsky, Z. Yu, L. So, and R. Dutton, "Relaxation-based harmonic balance technique for semiconductor device simulation," in Proc. Int. Conf. Computer-Aided Design, Santa Clara, CA, Nov. 1995.
[21] M. M. Gourary, S. G. Rusakov, S. L. Ulyanov, M. M. Zharov, K. Gullapalli, and B. J. Mulvaney, "The enhancing of efficiency of the harmonic balance analysis by adaptation of preconditioner to circuit nonlinearity," in Proc. Asia and South Pacific Design Automation Conf. (ASP-DAC '00), p. 537.
[22] F. Veerse, "Efficient iterative time preconditioners for harmonic balance RF circuit simulation," in Proc. Int. Conf. Computer-Aided Design (ICCAD '03), p. 251.
[23] H. G. Brachtendorf, G. Welsch, R. Laur, and A. Bunse-Gerstner, "Numerical steady state analysis of electronic circuits driven by multi-tone signals," Electrical Eng., vol. 79, pp. 103-112, 1996.
[24] J. Roychowdhury, "Analyzing circuits with widely separated time scales using numerical PDE methods," IEEE Trans. Circuits Syst. I: Fundam. Theory Appl., vol. 48, no. 5, pp. 578-594, 2001.
[25] S. Skelboe, "Computation of the periodic steady-state response of nonlinear networks by extrapolation methods," IEEE Trans. Circuits Syst., vol. CAS-27, pp. 161-175, 1980.
[26] V. Rizzoli and A. Neri, "State of the art and present trends in nonlinear microwave CAD techniques," IEEE Trans. Microwave Theory Tech., vol. 36, no. 2, pp. 343-365, Feb. 1988.
[27] R. W. Freund, G. H. Golub, and N. M. Nachtigal, "Iterative solution of linear systems," Acta Numerica, pp. 57-100, 1991.
[28] C. V. Loan, Computational Frameworks for the Fast Fourier Transform. Philadelphia, PA: SIAM, 1992.
[29] L. L. Whitcomb, Notes on Kronecker Products. [Online]. Available: spray.me.jhu.edu/llw/courses/me530647/kron_1.pdf
[30] P. Feldmann, R. C. Melville, and D. Long, "Efficient frequency domain analysis of large nonlinear analog circuits," in Proc. IEEE Custom Integrated Circuits Conf., May 1996.
[31] K. Kundert, J. White, and A. Sangiovanni-Vincentelli, "A mixed frequency-time approach for distortion analysis of switching filter circuits," IEEE J. Solid-State Circuits, vol. 24, no. 2, pp. 443-451, Apr. 1989.
[32] C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[33] K. S. Kundert, The Designer's Guide to SPICE and Spectre. Boston, MA: Kluwer, 1995.
[34] D. Feng, J. Phillips, K. Nabors, K. Kundert, and J. White, "Efficient computation of quasi-periodic circuit operating conditions via a mixed frequency/time approach," in Proc. 36th Design Automation Conf., Jun. 1999.
[35] J. Chen, D. Feng, J. Phillips, and K. Kundert, "Simulation and modeling of intermodulation distortion in communication circuits," in Proc. IEEE Custom Integrated Circuits Conf., May 1999.
[36] K. Kundert, "Introduction to RF simulation and its application," IEEE J. Solid-State Circuits, vol. 34, no. 9, Sep. 1999.
[37] Y. Thodeson and K. Kundert, "Parametric harmonic balance," in Proc. IEEE MTT-S Int. Microwave Symp. Dig., Jun. 1996.
[38] R. Telichevesky, K. S. Kundert, and J. K. White, "Efficient AC and noise analysis of two-tone RF circuits," in Proc. 33rd Design Automation Conf., Jun. 1996.
[39] J. Roychowdhury, D. Long, and P. Feldmann, "Cyclostationary noise analysis of large RF circuits with multi-tone excitations," IEEE J. Solid-State Circuits, Mar. 1998.
[40] R. Telichevesky, K. Kundert, I. El-Fadel, and J. White, "Fast simulation algorithms for RF circuits," in Proc. IEEE Custom Integrated Circuits Conf., May 1996.
[41] J. Phillips and B. Yang, "A multi-interval Chebyshev collocation method for efficient high-accuracy RF circuit simulation," in Proc. 37th ACM/IEEE Design Automation Conf. (DAC 2000), Los Angeles, CA, Jun. 2000, pp. 178-183.
[42] K. Mayaram, D. C. Lee, S. Moinian, D. Rich, and J. Roychowdhury, "Computer-aided circuit analysis tools for RFIC simulation: Algorithms, features, and limitations," IEEE Trans. Circuits Syst. II, pp. 274-286, Apr. 2000.
[43] L. N. Trefethen, Spectral Methods in MATLAB. Philadelphia, PA: SIAM, Jun. 2000.
[44] L. N. Trefethen and D. Bau, III, Numerical Linear Algebra. Philadelphia, PA: SIAM, 1997.
[45] N. Soveiko and M. Nakhla, "Wavelet harmonic balance," IEEE Microwave Wireless Components Lett., Jul. 2003.
[46] O. J. Nastov and J. White, "Time-mapped harmonic balance," in Proc. IEEE Design Automation Conf., pp. 641-646.
[47] O. J. Nastov, "Spectral methods for circuit analysis," Ph.D. dissertation, Massachusetts Inst. Technol., Cambridge, 1999.
Ognen Nastov, photograph and biography not available at the time of publication.

Ricardo Telichevesky (Member, IEEE) received the B.S. degree in electrical engineering from the Universidade Federal do Rio Grande do Sul, Brazil, in 1985, the M.S. degree in electrical engineering from the Technion, Israel, in 1988, and the Ph.D. degree in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1994. He specialized in computer architectures and numerical simulation. He is the Principal Researcher at Kineret Design Automation, Santa Clara, CA, working in the simulation of analog and mixed-signal systems. He played a key role in the development of Cadence's SpectreRF and Spectre simulator. He also worked in the modeling and real-time simulation of physics-based mechanical systems for virtual environments at Xulu Entertainment, Inc. His current research interests include numerical algorithms and efficient data structures for analog, RF, mixed-signal, and system-level simulation, and their interoperability with other tools in the design flow.

Ken Kundert received the B.S. degree in 1979, the M.S. degree in 1983, and the Ph.D. degree in electrical engineering from the University of California, Berkeley, in 1989. He is renowned for creating two of the most innovative, influential, and highest grossing circuit simulators ever produced: Cadence's Spectre and Agilent's harmonic balance simulator. He has a deep understanding of analog/mixed-signal and RF simulation technology and is well versed in its application. He co-founded Designer's Guide Consulting, Inc., in 2005. From 1989 to 2005, he worked at Cadence Design Systems as a Fellow. He created Spectre and was the Principal Architect of the Spectre circuit simulation family. As such, he has led the development of Spectre, SpectreHDL, and SpectreRF. He also played a key role in the development of Cadence's AMS Designer and made substantial contributions to both the Verilog-AMS and VHDL-AMS languages. Before that, he was a Circuit Designer at Tektronix and Hewlett-Packard and contributed to the design of the HP 8510 microwave network analyzer. He has written three books on circuit simulation: The Designer's Guide to Verilog-AMS in 2004, The Designer's Guide to SPICE and Spectre in 1995, and Steady-State Methods for Simulating Analog and Microwave Circuits in 1990. He is the author of 11 patents.