
1 Appendix: The Computation of Fixed Points

In solving for the decision rules of agents in dynamic search-bargaining models, we typically will have to resort to iterative methods. The scalars or functions which define the decision rules of agents are almost invariably of the form

v = T v, (1)

where v is the object for which we are seeking a solution and T is a mapping from the set of values which v can possibly take “into” itself. For example, say that v is a scalar element which can take values on the extended real line [i.e., the real line with +∞ and −∞ added], and let T v = a + bv, with b ≠ 1. Then T is a linear mapping, and in this case there actually exists a closed-form solution for v, namely

v = a + bv
⇒ v = a/(1 − b).
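As a quick numerical check of the closed form (a few lines of Python; the values a = −10, b = .8 are the ones that reappear in Table A.2), the solution v = a/(1 − b) does satisfy v = T v:

```python
# Closed form for the linear map T v = a + b*v when b != 1: v = a / (1 - b).
a, b = -10.0, 0.8
v = a / (1.0 - b)    # closed-form solution: -50.0
T_v = a + b * v      # applying T to the candidate solution
# T_v equals v, confirming v is the fixed point of T.
```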
The mappings we usually consider in microeconomic applications are unfortunately not linear. We therefore need more general methods that either guarantee the existence and (possibly) uniqueness of solutions to (1), and/or provide computational methods for solving these implicit functions.

1.1 Contraction Mappings


Let X be a metric space which is equipped with a metric ρ. Then ρ(x, y) is
the distance between x and y, for x, y ∈ X. The distance function ρ has the
following properties:

1. ρ(x, y) = ρ(y, x) (Symmetry)

2. ρ(x, y) ≥ 0, with ρ(x, y) = 0 if and only if x = y.

3. ρ(x, z) ≤ ρ(x, y) + ρ(y, z) (Triangle Inequality)

We say that the metric space X is complete if for every Cauchy sequence {xn }, that is, every sequence such that

limn→∞ ρ(xn , xn+m ) = 0 for each m,

there exists an element x̂ ∈ X such that

limn→∞ ρ(xn , x̂) = 0.

We call x̂ = limn→∞ xn the limit point of the sequence {xn }. Completeness requires that this limit point be a member of the space X.

Definition 1 An operator T which maps X into itself is called a contraction mapping if for some ψ ∈ (0, 1)

ρ(T x, T y) ≤ ψρ(x, y) for all x, y ∈ X.

Example 2 Reconsider the linear mapping T v = a + bv, b ≠ 1. Is this a contraction? Since ρ(a + bv, a + bv′) = ρ(bv, bv′), the value of a is irrelevant in determining whether a + bv is a contraction. Now ρ(bv, bv′) ≤ ψρ(v, v′) for some ψ ∈ (0, 1) if and only if |b| < 1, so this is the property required for a + bv to be a contraction. Thus T v = −10000 + .98v is a contraction, while T v = .2 − 1.3v is not.

It is extremely useful to establish that T is a contraction for at least two practical reasons. Both are apparent from the following theorem.

Theorem 3 If X is a complete metric space and T a contraction mapping, then there exists a unique v such that

T v = v.

Furthermore, for any x ∈ X,

limn→∞ ρ(T n x, v) = 0,

where T 1 x = T x, T 2 x = T (T 1 x), ..., T n x = T (T n−1 x).

Proof. First we demonstrate uniqueness of the fixed point. If T u = u and T v = v, then

ρ(u, v) = ρ(T u, T v) ≤ ψρ(u, v)
⇒ ρ(u, v) = 0,

since ψ < 1, so that the fixed point is unique.

For an arbitrary x ∈ X, consider ρ(T n+m x, T n x). Now

ρ(T n+m x, T n x) ≤ ψ n ρ(T m x, x)
≤ ψ n [ρ(T m x, T m−1 x) + ... + ρ(T x, x)]
≤ ψ n ρ(T x, x)[ψ m−1 + ... + ψ + 1],

which implies

limn→∞ ρ(T n+m x, T n x) = 0.

Since X is complete we know that v = limn→∞ T n x exists. T is a continuous mapping, since if limn→∞ ρ(xn , x̂) = 0 then ρ(T xn , T x̂) ≤ ψρ(xn , x̂), which has a limiting value of 0. Then

T v = T limn→∞ T n x = limn→∞ T n+1 x = v.

This theorem demonstrates first that if T is a contraction mapping, there exists a unique solution in the complete metric space X. Furthermore, the theorem provides a computational technique to determine the solution, a technique referred to as successive approximation. The algorithm is as follows.

Table A.1
Method of Successive Approximation

Begin by setting a tolerance C > 0, k = 0, and an initial value v0 .

1. Given vk , compute vk+1 = T vk .
2. Compute Dk = ρ(vk+1 , vk ).
3. If Dk ≤ C, stop and set v̂ = vk+1 .
   If Dk > C, set k = k + 1 and return to step (1).
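For scalar problems with ρ(x, y) = |x − y|, the algorithm in Table A.1 can be sketched in a few lines of Python (the function name and default tolerance are our own choices, not from the text):

```python
# A minimal sketch of the method of successive approximation (Table A.1)
# for scalar fixed point problems with rho(x, y) = |x - y|.
def successive_approx(T, v0, C=1e-8, max_iter=10_000):
    v_k = v0
    for _ in range(max_iter):
        v_next = T(v_k)            # step 1: v_{k+1} = T v_k
        D_k = abs(v_next - v_k)    # step 2: D_k = rho(v_{k+1}, v_k)
        if D_k <= C:               # step 3: stop when D_k <= C
            return v_next
        v_k = v_next
    raise RuntimeError("no convergence within max_iter iterations")

# The contraction T v = -10 + .8v (column 1 of Table A.2) has unique
# fixed point v = -50.
v_hat = successive_approx(lambda v: -10 + 0.8 * v, v0=0.0)
```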

In Table A.1 we have written the stopping rule for the algorithm in terms of the absolute distance between the iterate vk+1 and the iterate vk . From the contraction mapping theorem, we know that this distance monotonically declines in the number of iterations, k. There are many possible stopping rules to use, however, in deciding when we are “close enough” to the true value of the fixed point to terminate the iterative procedure. When T is a contraction, it is also possible to set a stopping rule with the property that the error after N(ε, v0 ) + 1 iterations is no larger than ε when starting from the initial value v0 . To find N(ε, v0 ) requires the following result.

Theorem 4 If T is a contraction mapping, then

ρ(T n v0 , v) ≤ (1 − ψ)−1 ρ(T n v0 , T n+1 v0 ), ∀v0 ∈ X, (2)

where ψ is the modulus of the operator T .

Proof. Since limn→∞ ρ(T n v0 , v) = 0, we have limm→∞ ρ(T n v0 , T n+m v0 ) = ρ(T n v0 , v). Now

limm→∞ ρ(T n v0 , T n+m v0 ) ≤ ρ(T n v0 , T n+1 v0 ) + ρ(T n+1 v0 , T n+2 v0 ) + ...
≤ (1 + ψ + ...)ρ(T n v0 , T n+1 v0 )
= (1 − ψ)−1 ρ(T n v0 , T n+1 v0 ).

We can use this result to set the number of iterations of T we will compute in the following manner. Say that we are willing to tolerate a discrepancy of ε > 0 between the computed value of the fixed point, T n v0 , and the true value, v. Then given any starting point v0 , we will stop the iterative procedure after iteration N(ε, v0 ) + 1, where N(ε, v0 ) is the first index at which the distance between successive iterates falls below ε(1 − ψ):

ρ(T N (ε,v0 ) v0 , T N (ε,v0 )+1 v0 ) ≤ ε(1 − ψ)
ρ(T N (ε,v0 )−1 v0 , T N (ε,v0 ) v0 ) > ε(1 − ψ).

No matter what stopping rule one uses, when T is a contraction with modulus ψ, it is always possible to use (2) to bound the size of the approximation error.
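The bound (2) translates directly into a practical stopping rule: stop as soon as the gap between successive iterates falls below ε(1 − ψ). A minimal Python sketch for the scalar case (the function name and loop cap are our own):

```python
# Sketch of the a posteriori stopping rule implied by (2): once
# |T^n v0 - T^{n+1} v0| <= eps * (1 - psi), the bound guarantees that
# T^n v0 is within eps of the true fixed point v.
def iterate_to_tolerance(T, v0, psi, eps, max_iter=100_000):
    v, n = v0, 0
    while n < max_iter:
        v_next = T(v)
        if abs(v_next - v) <= eps * (1.0 - psi):
            return v_next, n + 1    # iterations actually performed
        v, n = v_next, n + 1
    raise RuntimeError("no convergence within max_iter iterations")

# T v = -10 + .8v has modulus psi = .8 and true fixed point v = -50.
v_hat, n_used = iterate_to_tolerance(lambda v: -10 + 0.8 * v, 0.0,
                                     psi=0.8, eps=1e-4)
```

Note that checking the gap at iterate N requires computing T one more time, which is why the procedure stops after iteration N(ε, v0 ) + 1.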
In many cases, it is not possible to demonstrate that a particular mapping is a contraction, as was the case for some of the linear functions we saw in the examples. Even if T is a contraction, it may be the case that the method of successive approximation converges slowly or “unevenly.” In all these cases, it is often still possible to use a “modified” method of successive approximation to solve for the fixed point. Of course, if T is not a contraction, existence and uniqueness of a fixed point v is not guaranteed. For the present, we assume that we have established the existence of a unique equilibrium, so the only problem is its computation.
Let us now assume that T is not necessarily a contraction mapping, but
that there exists a unique fixed point of T which is denoted by v. Define
another map L as follows:

Lv = ζT v + (1 − ζ)v, ζ ∈ [0, 1]. (3)

Note the following.

Proposition 5 If T possesses a unique fixed point v and ζ ≠ 0, then v is also the unique fixed point of L.

Proof. First we show that v is a fixed point of L. Since

v = T v,

then

Lv = ζT v + (1 − ζ)v = v,

so v is also a fixed point of L. Uniqueness is easily established by noting that if u were another fixed point of L, then

u = ζT u + (1 − ζ)u
⇒ ζu = ζT u
⇒ u = T u (since ζ ≠ 0),

which contradicts the uniqueness of the fixed point of T . Thus L has the same unique fixed point as T .


Note that the scalar parameter ζ does not have to belong to the interval [0, 1] for this argument to go through; in fact it can take any nonzero value.1 The reason we have restricted it to the unit interval is the nature of the fixed point problems we typically confront in solving stationary search models. In most cases, we want to “dampen” the oscillations that occur between iterations of the value function. Let vn denote the iterated value after n iterations (starting from some initial point v0 ). Then the value of vn+1 using the map L is

vn+1 = ζT vn + (1 − ζ)vn ,

1
See Judd (1997, Chapter ??) for a discussion of this point.

which is a convex combination (when ζ ∈ [0, 1]) of T vn and the original value
vn . When ζ = 1, we have the “classic” successive approximation algorithm
described in Table A.1. When ζ = 0, the algorithm remains forever stuck at
the initial value v0 , which is obviously an undesirable situation. Therefore,
in setting ζ the trade-offs are between instability and speed (conditional on
convergence) for high values of ζ versus slow but steady convergence for low
values of ζ. A value of ζ in the neighborhood of .3 often works well for the
types of fixed point problems considered in this monograph.
Bear in mind that when using a dampening factor ζ < 1, the stopping rule used in deciding when to stop the iteration sequence should be modified accordingly. Clearly, when ζ is low there will be relatively small changes between the iterates vn and vn+1 for purely artificial reasons. If one is using a criterion of the form |vn − vn+1 | < ε to decide when to stop with the operator T , one might want to use |vn − vn+1 | < 10−2 ε when using the operator L with ζ = .3. For lower values of ζ, one would want to use even smaller values than 10−2 ε in the stopping rule.
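The dampened iteration in (3) can be sketched by a small modification of the classic algorithm (a Python illustration; the function name and defaults are our own choices):

```python
# Sketch of the modified method of successive approximation using the
# dampened map L v = zeta*T(v) + (1 - zeta)*v from (3).  Setting zeta = 1
# recovers the classic algorithm of Table A.1; as noted in the text, the
# tolerance C should be tightened when zeta is small, since successive
# iterates of L are artificially close together.
def modified_successive_approx(T, v0, zeta=0.3, C=1e-8, max_iter=10_000):
    v = v0
    for _ in range(max_iter):
        v_next = zeta * T(v) + (1.0 - zeta) * v
        if abs(v_next - v) <= C:
            return v_next
        v = v_next
    raise RuntimeError("no convergence within max_iter iterations")

# T v = -10 - 1.2v is not a contraction, but with zeta = .2 the dampened
# iteration converges to the unique fixed point -10/2.2 (see Table A.2).
v_hat = modified_successive_approx(lambda v: -10 - 1.2 * v, 0.0, zeta=0.2)
```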
Table A.2 contains an example of the method of successive approximation and its “modified” form for the linear mappings we considered above. All iteration sequences begin from the starting value v0 = 0. Column 1 contains the iteration sequence for the mapping T v = −10 + .8v. We know that this function is a contraction mapping, and therefore the sequence should converge to its unique fixed point (v = T v) of -50 from any starting value. This is in fact what we observe, with convergence to the fifth decimal place by iteration 70.
Columns 2 through 4 contain approximation sequences for the mapping T v = −10 − 1.2v. While there is a unique fixed point for this map (equal to -10/2.2), the map itself is not a contraction. We see that starting from the point v0 = 0, when ζ = 1 (column 2) the algorithm diverges. This is not the case in columns 3 and 4, where ζ was set to .2 and .8, respectively. The best performance in this case was for ζ = .2, for which convergence was both rapid and “smooth.” Convergence was also obtained for ζ = .8, but was noticeably slower. Of course, which ζ works best in any specific problem will depend on the nature of the map and the starting value, and cannot generally be determined except through trial and error.
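The dampened columns of Table A.2 can be reproduced directly by iterating the map L (a short Python check; `iterate_L` is our own helper name):

```python
# Reproduce the zeta = .2 column of Table A.2 by iterating
# L v = zeta*(-10 - 1.2*v) + (1 - zeta)*v starting from v0 = 0.
def iterate_L(zeta, n_steps, v=0.0):
    for _ in range(n_steps):
        v = zeta * (-10.0 - 1.2 * v) + (1.0 - zeta) * v
    return v

v10 = iterate_L(0.2, 10)   # Table A.2 reports -4.53167 at iteration 10
v30 = iterate_L(0.2, 30)   # and -4.54545 at iteration 30
```

With ζ = .2, the composite map L is linear with slope .2(−1.2) + .8 = .56, which is why the dampened iteration contracts even though T itself does not.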

Table A.2
Illustration of Method of Successive Approximation
(v0 = 0)

                             T v or Lv
Iteration   −10 + .8v    −10 − 1.2v    ζ(−10 − 1.2v) + (1 − ζ)v
                                         ζ = .2       ζ = .8
   10       -44.63129      23.59880    -4.53167     -4.25323
   20       -49.42354     169.71636    -4.54541     -4.52667
   30       -49.93810    1074.4378     -4.54545     -4.54425
   40       -49.99335         ⋮                     -4.54538
   50       -49.99929                               -4.54545
   60       -49.99992
   70       -49.99999
