The Newton-Raphson Method and Its Application To Fixed Points
The Newton-Raphson algorithm is a numerical method for finding the roots of a function. It does
so by computing the Jacobian linearization of the function around an initial guess point, and using
this linearization to move closer to the nearest zero.
Consider a function H: R^n → R^n that has a zero at x*, i.e. H(x*) = 0. If we set y = H(x), we can
approximate changes in y, Δy, due to changes in x, Δx, by linearizing H around some point x_i:

\Delta y \approx J_H(x_i)\, \Delta x    (1)

where J_H(x_i) is the Jacobian of H evaluated at x_i. We seek a zero of H, so we set Δy = -H(x_i) and solve (1)
for Δx to obtain
\Delta x = -J_H(x_i)^{-1} H(x_i)    (2)
Because (1) is a Jacobian linearization of H, (2) provides a change in x that moves closer to
the desired zero. A second iteration of this process can now be performed at the point
x_{i+1} = x_i + Δx. Using (2), this becomes

x_{i+1} = x_i - J_H(x_i)^{-1} H(x_i)    (3)
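As a simple one-dimensional illustration (an example added here, not taken from the original text), consider H(x) = x^2 - 2 with initial guess x_0 = 1. One application of (3) gives

x_1 = x_0 - \frac{H(x_0)}{H'(x_0)} = 1 - \frac{1^2 - 2}{2 \cdot 1} = 1.5,

which already lies close to the zero x* = \sqrt{2} \approx 1.414.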
(3) can be used iteratively from some initial guess to yield successively better approximations of
x*. The algorithm terminates once a value of x is reached such that H(x) is sufficiently close to
zero.
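The iteration fits in a few lines of code. The sketch below is illustrative only (the test function, its Jacobian, and all names are assumptions, not from the original); it implements (3), solving the linear system J_H(x_i) Δx = -H(x_i) rather than forming the inverse in (2) explicitly:

```python
import numpy as np

def newton_raphson(H, J_H, x0, tol=1e-10, max_iter=50):
    """Iterate x_{i+1} = x_i - J_H(x_i)^{-1} H(x_i) until H(x) is near zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Hx = H(x)
        if np.linalg.norm(Hx) < tol:          # H(x) sufficiently close to zero
            return x
        dx = np.linalg.solve(J_H(x), -Hx)     # solve J_H(x) dx = -H(x)
        x = x + dx                            # update (3)
    return x

# Illustrative test problem: H(x, y) = (x^2 + y^2 - 1, x - y), whose zero is
# the intersection of the unit circle with the line y = x.
H = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J_H = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_raphson(H, J_H, x0=[1.0, 0.5]))  # approximately (0.7071, 0.7071)
```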
The image below shows a graphical representation of one iteration of the method for a function
f: R → R.
2. Newton-Raphson and Fixed Points
A fixed point is defined as a location on the Poincaré plane, x*, such that x* = Φ(x*), where Φ(x) denotes
the result of the forward integration from x along the phase trajectory back to the plane. We can find
this value from some initial guess point x^i by establishing an error function E(x) = Φ(x) - x that
provides the difference between a state vector and the result of this forward integration. This error is
zero for a fixed point, thus finding a fixed point corresponds to finding the zero of E, which can be
accomplished using the multidimensional form of the Newton-Raphson algorithm.
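In code, the error function is a thin wrapper around the return map. A minimal sketch, assuming Φ is available as a function (the poincare_map stand-in below is hypothetical; a real implementation would integrate the dynamics forward until the trajectory returns to the Poincaré plane):

```python
import numpy as np

def poincare_map(x):
    """Hypothetical stand-in for Phi. A real implementation would integrate the
    equations of motion from x until the next crossing of the Poincare plane
    (e.g. with scipy.integrate.solve_ivp and an event function)."""
    A = np.array([[0.5, 0.1], [0.0, 0.8]])    # illustrative linear map only
    return A @ x + np.array([1.0, 0.5])

def error(x):
    """E(x) = Phi(x) - x, which vanishes exactly at a fixed point x*."""
    return poincare_map(x) - np.asarray(x, dtype=float)
```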
If we start from an initial guess for the fixed point, x^i, then (3) becomes

x^{i+1} = x^i - J_E(x^i)^{-1} E(x^i)    (4)
The task now is to determine the Jacobian matrix of the error function at the point x^i. If the phase
space is a subset of R^n, then the error function is a mapping E: R^n → R^n, where the i-th element of
E, E_i, defines the error of the i-th state variable. The Jacobian for such a function is an n × n matrix
of the form

J_E(x^i) =
\begin{pmatrix}
\partial E_1/\partial x_1 & \cdots & \partial E_1/\partial x_n \\
\vdots & \ddots & \vdots \\
\partial E_n/\partial x_1 & \cdots & \partial E_n/\partial x_n
\end{pmatrix}    (5)
Note that each column contains the derivatives of E with respect to a particular state variable. The
derivatives contained in (5) can be computed by perturbing each state variable in x^i in turn by a
small amount ε along the Poincaré plane, and evaluating the resulting vector with the flow Φ. We
then apply the error function to the result. When ε is small, this process estimates the derivative
of the error with respect to the perturbed variable: ∂E/∂x_n. For instance, to compute the first
column of (5), we perturb the first state variable in our initial guess by a small amount, while
keeping the other variables the same:
x^i + dx_1 =
\begin{pmatrix}
x_1^i + \varepsilon_1 \\
x_2^i \\
\vdots \\
x_n^i
\end{pmatrix}
We now evaluate the error function with this perturbed variable, giving us the change in E due to
the small change in x^i:

\Delta E = E(x^i + dx_1) - E(x^i) = [\Phi(x^i + dx_1) - (x^i + dx_1)] - [\Phi(x^i) - x^i]    (6)
\frac{\Delta E}{\varepsilon_1} \approx \frac{\partial E}{\partial x_1}

which corresponds to the first column of the Jacobian matrix of E. This process can be repeated
for each state variable in x^i, eventually yielding the matrix in (5).
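This column-by-column procedure amounts to a forward finite-difference estimate of (5). A minimal sketch, assuming E is available as a function of the state (the function and parameter names are illustrative, not from the original):

```python
import numpy as np

def jacobian_fd(E, x, eps=1e-6):
    """Estimate the Jacobian of E at x by perturbing one state variable at a
    time by eps and forming Delta E / eps, one column per variable, as in (6)."""
    x = np.asarray(x, dtype=float)
    E0 = E(x)                                  # E(x^i), reused for every column
    J = np.empty((E0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps                            # perturb only the j-th variable
        J[:, j] = (E(x + dx) - E0) / eps       # j-th column of (5)
    return J
```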
With the Jacobian identified, (4) can be used to find the next guess for the fixed point. The loop
terminates once all the elements of the error function are sufficiently close to zero, i.e. below
some small tolerance δ > 0:

|E_j(x^i)| < \delta \qquad \text{for all } j = 1, \ldots, n
Note that (4) is not converging if, at any iteration i, the error increases at the next
iteration i+1.
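Putting the pieces together, a complete fixed-point search might look like the sketch below. All names, tolerances, and the affine stand-in for the return map are assumptions for illustration; the sketch also stops when the error grows between iterations, mirroring the note above:

```python
import numpy as np

def find_fixed_point(poincare_map, x0, eps=1e-6, delta=1e-8, max_iter=50):
    """Newton-Raphson search for x* with poincare_map(x*) = x*, using a
    finite-difference Jacobian of E(x) = poincare_map(x) - x and update (4)."""
    E = lambda x: poincare_map(x) - x
    x = np.asarray(x0, dtype=float)
    prev_err = np.inf
    for _ in range(max_iter):
        Ex = E(x)
        err = np.max(np.abs(Ex))
        if err < delta:                        # every |E_j| below the tolerance
            return x
        if err > prev_err:                     # error grew: not converging
            raise RuntimeError("Newton-Raphson iteration is not converging")
        prev_err = err
        # Finite-difference Jacobian: one perturbed state variable per column.
        J = np.empty((Ex.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (E(x + dx) - Ex) / eps
        x = x + np.linalg.solve(J, -Ex)        # Newton update (4)
    raise RuntimeError("maximum number of iterations reached")

# Usage with an illustrative affine stand-in for the return map; its fixed
# point solves x = A x + b, i.e. x* = (2.5, 2.5).
A = np.array([[0.5, 0.1], [0.0, 0.8]])
b = np.array([1.0, 0.5])
print(find_fixed_point(lambda x: A @ x + b, x0=[0.0, 0.0]))
```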