App 1
v = T v, (1)
where v is the object for which we are seeking a solution and T is a mapping
from the set of values which v can possibly take “into” itself. For example,
say that v is a scalar element which can take values on the extended real
line [i.e., the real line with +∞ and −∞ added], and let T v = a + bv, with
b ≠ 1. Then T is a linear mapping, and in this case there actually exists a
closed form solution for v, namely
v = a + bv  ⇒  v = a/(1 − b).
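This closed form is easy to verify numerically. A minimal sketch, using the illustrative coefficient values a = −10 and b = .8 (the same map that appears later in Table A.2):

```python
# Fixed point of the linear map T v = a + b*v with b != 1:
# the closed-form solution is v = a / (1 - b).
a, b = -10.0, 0.8
v_closed = a / (1.0 - b)            # closed-form fixed point, approximately -50

# Check the fixed-point property v = T v.
assert abs((a + b * v_closed) - v_closed) < 1e-12
print(v_closed)
```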
The mappings we usually consider in microeconomic applications are not
linear, unfortunately. We therefore need more general methods that either
serve to guarantee the existence and (possibly) uniqueness of solutions to (1),
and/or provide computational methods for solving these implicit functions.
We say that the metric space X is complete if for every Cauchy sequence {x_n} in X there exists an element x̂ ∈ X such that lim_{n→∞} ρ(x_n, x̂) = 0. When T is a contraction mapping on a complete metric space, the contraction mapping theorem guarantees that there exists a unique fixed point v such that

T v = v,

and that for an arbitrary starting point x ∈ X,

lim_{n→∞} ρ(T^n x, v) = 0.
For an arbitrary x ∈ X, consider ρ(T^{n+m} x, T^n x). Now

ρ(T^{n+m} x, T^n x) ≤ ψ^n ρ(T^m x, x) ≤ [ψ^n/(1 − ψ)] ρ(T x, x),

which implies that {T^n x} is a Cauchy sequence, and hence converges to a limit in the complete space X.
Table A.1
Method of Successive Approximation
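In code, the method of successive approximation can be sketched as follows (a minimal Python sketch; the function name, tolerance, and iteration cap are illustrative choices, not from Table A.1):

```python
def successive_approximation(T, v0, eps=1e-5, max_iter=10_000):
    """Iterate v_{k+1} = T(v_k), stopping when |v_{k+1} - v_k| < eps."""
    v = v0
    for k in range(max_iter):
        v_next = T(v)
        if abs(v_next - v) < eps:   # stopping rule: absolute distance between iterates
            return v_next, k + 1
        v = v_next
    raise RuntimeError("no convergence within max_iter iterations")

# Example: the contraction T v = -10 + .8*v, whose unique fixed point is -50.
v_hat, n_iter = successive_approximation(lambda v: -10 + 0.8 * v, v0=0.0)
```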
In Table A.1 we have written the stopping rule for the algorithm in terms
of the absolute distance between the iterate vk+1 and the iterate vk . From
the contraction mapping theorem, we know that this distance monotonically
declines in the number of iterations, k. There are, however, many possible
stopping rules to use in deciding when we are "close enough" to the true
value of the fixed point to terminate the iterative procedure. When T is a
contraction, it is also possible to set a stopping rule with the property
that the error after N(ε, v0) + 1 iterations is no larger than ε when starting
from the initial value v0. Finding N(ε, v0) requires the following result.
We can use this result to set the number of iterations of T we will compute in the following manner. Say that we are willing to tolerate a discrepancy of ε > 0 between the computed value of the fixed point, T^n v0, and the true value, v. Then given any starting point v0, we will stop the iterative procedure after iteration N(ε, v0) + 1, where

N(ε, v0) = min{ n : [ψ^n/(1 − ψ)] ρ(T v0, v0) ≤ ε }.
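Given the modulus ψ and the one-step distance ρ(T v0, v0), this N(ε, v0) can be computed before iterating. A sketch (the helper name is hypothetical; the formula is just the contraction bound solved for n):

```python
import math

def n_iterations(psi, rho_one_step, eps):
    """Smallest n with psi**n / (1 - psi) * rho_one_step <= eps,
    where rho_one_step = rho(T v0, v0) and psi is the modulus of T."""
    return math.ceil(math.log(eps * (1 - psi) / rho_one_step) / math.log(psi))

# Example: T v = -10 + .8*v from v0 = 0, so rho(T v0, v0) = |T(0) - 0| = 10.
N = n_iterations(psi=0.8, rho_one_step=10.0, eps=1e-5)   # N = 70
```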
No matter what stopping rule one uses, when T is a contraction with modulus ψ, it is always possible to use (2) to bound the size of the approximation error.
In many cases it is not possible to demonstrate that a particular mapping is a contraction, as was the case for some of the linear mappings we saw in the examples. Even if T is a contraction, the method of successive approximation may converge slowly or "unevenly." In all these cases, it is often still possible to use a "modified" method of successive approximation to solve for the fixed point. Of course, if T is not a contraction, the existence and uniqueness of a fixed point v are not guaranteed. For the present, we assume that we have established the existence of a unique equilibrium, so the only remaining problem is its computation.
Let us now assume that T is not necessarily a contraction mapping, but that there exists a unique fixed point of T, which is denoted by v. Define another map L as follows:

L u = ζ T u + (1 − ζ) u.

If v = T v, then

L v = ζ T v + (1 − ζ) v = v,

so v is also a fixed point of L. Conversely, if u = L u and ζ ≠ 0, then

u = ζ T u + (1 − ζ) u  ⇒  ζ u = ζ T u  ⇒  u = T u,

so L and T have exactly the same fixed points. The modified method of successive approximation iterates

v_{n+1} = ζ T v_n + (1 − ζ) v_n,
¹ See Judd (1997, Chapter ??) for a discussion of this point.
which is a convex combination (when ζ ∈ [0, 1]) of T vn and the original value
vn . When ζ = 1, we have the “classic” successive approximation algorithm
described in Table A.1. When ζ = 0, the algorithm remains forever stuck at
the initial value v0 , which is obviously an undesirable situation. Therefore,
in setting ζ the trade-offs are between instability and speed (conditional on
convergence) for high values of ζ versus slow but steady convergence for low
values of ζ. A value of ζ in the neighborhood of .3 often works well for the
types of fixed point problems considered in this monograph.
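The modified iteration can be sketched as follows; the default ζ = .3 follows the rule of thumb above, while the tolerance and iteration cap are illustrative choices:

```python
def damped_iteration(T, v0, zeta=0.3, eps=1e-7, max_iter=100_000):
    """Modified successive approximation: v_{n+1} = zeta*T(v_n) + (1 - zeta)*v_n.
    zeta = 1 recovers the classic algorithm; zeta = 0 never moves from v0."""
    v = v0
    for _ in range(max_iter):
        v_next = zeta * T(v) + (1 - zeta) * v
        if abs(v_next - v) < eps:
            return v_next
        v = v_next
    raise RuntimeError("no convergence within max_iter iterations")

# T v = -10 - 1.2*v is not a contraction, but with zeta = .3 the damped map
# is v -> -3 + 0.34*v, a contraction with modulus .34.
v_hat = damped_iteration(lambda v: -10 - 1.2 * v, v0=0.0)   # converges to -10/2.2
```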
Bear in mind that when using a dampening factor ζ < 1, the stopping
rule used in deciding when to stop the iteration sequence should be modified
accordingly. Clearly, when ζ is low there will be relatively small changes
between the iterates v_n and v_{n+1} for purely artificial reasons. If one is
using a criterion of the form |v_n − v_{n+1}| < ε to decide when to stop with
the operator T, one might want to use |v_n − v_{n+1}| < 10^{−2} ε when using the
operator L with ζ = .3. For lower values of ζ, one would want to use even
smaller thresholds than 10^{−2} ε in the stopping rule.
Table A.2 contains an example of the method of successive approximation
and its “modified” form for the linear mappings we considered above. All
iteration sequences begin from the starting value v0 = 0. Column 1 contains
the iteration sequence for the mapping T v = −10 + .8v. We know that
this function is a contraction mapping, and therefore should converge to its
unique fixed point (v = T v) of -50 from any starting value. This is in fact
what we observe, with convergence to the fifth decimal point by iteration
70.
Columns 2 through 4 contain approximation sequences for the mapping
T v = −10 − 1.2v. While there is a unique fixed point for this map (equal
to -10/2.2), the map itself is not a contraction. We see that starting from
the point v0 = 0, when ζ = 1 (column 2) the algorithm diverges. This is
not the case in columns 3 and 4, where ζ was set to .2 and .8, respectively.
The best performance in this case was for ζ = .2: convergence was both
rapid and "smooth." Convergence was also obtained for ζ = .8,
but was noticeably slower. Of course, which ζ works best in any specific
problem will depend on the nature of the map and the starting value, and
cannot generally be determined except through trial and error.
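The qualitative pattern just described can be reproduced directly; a sketch, with the effective linear map for each ζ noted in comments:

```python
def iterate(T, zeta, v0=0.0, n=200):
    """Return the iterates of v -> zeta*T(v) + (1 - zeta)*v starting from v0."""
    vs = [v0]
    for _ in range(n):
        vs.append(zeta * T(vs[-1]) + (1 - zeta) * vs[-1])
    return vs

def T(v):                             # unique fixed point -10/2.2; not a contraction
    return -10 - 1.2 * v

diverging = iterate(T, zeta=1.0)      # classic algorithm: oscillates and diverges
fast      = iterate(T, zeta=0.2)      # effective map v -> -2 + 0.56*v (modulus .56)
slow      = iterate(T, zeta=0.8)      # effective map v -> -8 - 0.76*v (modulus .76)
```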
Table A.2
Illustration of Method of Successive Approximation (v0 = 0)

                 T v                        L v = ζ(−10 − 1.2v) + (1 − ζ)v
Iteration   −10 + .8v    −10 − 1.2v         ζ = .2          ζ = .8