Lempa 2012

This document summarizes an article that studies optimal stopping problems where the decision maker can only stop on the jump times of an independent Poisson process, rather than at any time. It introduces two related optimal stopping problems - one where stopping is allowed immediately, and one where it is not. The main result characterizes the optimal stopping times for these problems under certain conditions on the underlying diffusion process and payoff function. The model is also discussed as a way to study optimal timing of irreversible investment decisions under an exogenous information constraint.


Appl Math Optim (2012) 66:147–173

DOI 10.1007/s00245-012-9166-0

Optimal Stopping with Information Constraint

Jukka Lempa

Published online: 23 March 2012


© Springer Science+Business Media, LLC 2012

Abstract We study the optimal stopping problem proposed by Dupuis and Wang
(Adv. Appl. Probab. 34:141–157, 2002). In this maximization problem of the ex-
pected present value of the exercise payoff, the underlying dynamics follow a linear
diffusion. The decision maker is not allowed to stop at any time she chooses but rather
on the jump times of an independent Poisson process. Dupuis and Wang (Adv. Appl.
Probab. 34:141–157, 2002) solve this problem in the case where the underlying is a
geometric Brownian motion and the payoff function is of American call option type.
In the current study, we propose a mild set of conditions (covering the setup of Dupuis
and Wang in Adv. Appl. Probab. 34:141–157, 2002) on both the underlying and the
payoff and build and use a Markovian apparatus based on the Bellman principle of
optimality to solve the problem under these conditions. We also discuss the interpre-
tation of this model as optimal timing of an irreversible investment decision under an
exogenous information constraint.

Keywords Optimal stopping · Irreversible investment · Linear diffusion · Poisson process · Bellman principle of optimality · Resolvent operator

1 Introduction and the Main Result

1.1 The Underlying Dynamics

We assume that the underlying state process X is a regular linear diffusion defined on a complete filtered probability space (Ω, F, {Ft}t≥0, P) satisfying the usual conditions and evolving on R+ with the initial state x, see [5]. For brevity, we denote

J. Lempa ()
Centre of Mathematics for Applications, University of Oslo, PO Box 1053, Blindern, 0316 Oslo,
Norway
e-mail: [email protected]

the filtration {Ft }t≥0 as F. In addition, we denote as Px the probability measure P


conditioned on the initial state x and as Ex the expectation with respect to Px . In
line with most economical and financial applications, we assume that X does not die
inside R+ , i.e., that killing of X is possible only at the boundaries 0 and ∞. There-
fore the boundaries 0 and ∞ are either natural, entrance, exit or regular. In the case
a boundary is regular, it is assumed to be killing, see [5], pp. 18–20, for a charac-
terization of the boundary behavior of diffusions. The life time of X is defined as ζ := inf{t ≥ 0 : Xt ∉ R+}. Now, the evolution of X is completely determined by its scale function S and speed measure m inside R+, see [5], pp. 13–14. Furthermore,
we assume that the function S and the measure m are both absolutely continuous
with respect to the Lebesgue measure, have smooth derivatives and that S is twice
continuously differentiable. Under these assumptions, we know that the infinitesimal generator A : D(A) → Cb(R+) of X can be expressed as

$$\mathcal{A} = \frac{1}{2}\sigma^2(x)\frac{d^2}{dx^2} + \mu(x)\frac{d}{dx},$$

where the functions σ and μ (the infinitesimal parameters of X) are related to S and m via the formulæ m'(x) = (2/σ²(x))e^{B(x)} and S'(x) = e^{−B(x)} for all x ∈ R+, where B(x) := ∫^x (2μ(y)/σ²(y)) dy, see [5], pp. 17. From these definitions we find that σ²(x) = 2/(S'(x)m'(x)) and μ(x) = −S''(x)/(S'(x)²m'(x)) for all x ∈ R+. In what follows, we assume that the functions μ and σ² are continuous. The assumption that the state space
is R+ is done for reasons of notational convenience. In fact, we could assume that
the state space is any interval I in R and all our subsequent analysis would hold
with obvious modifications. Furthermore, we denote as, respectively, ψr and ϕr the
increasing and the decreasing solution of the ordinary second order linear differential
equation Au = ru, where r > 0, defined on the domain of the characteristic operator
of X—for a characterization and fundamental properties of the minimal r-excessive
functions ψr and ϕr , see [5], pp. 18–20. In addition, we assume that the filtration F
is rich enough to carry a Poisson process N = (Nt , Ft ) with intensity λ. We call the
process N the signal process, and assume that X and N are independent.
For r > 0, we denote as L¹r the class of real valued measurable functions f on R+ satisfying the condition Ex[∫₀^ζ e^{−rs}|f(Xs)| ds] < ∞. For a function f ∈ L¹r, the resolvent Rr f : R+ → R is defined as

$$(R_r f)(x) = E_x\left[\int_0^{\zeta} e^{-rs} f(X_s)\,ds\right], \tag{1.1}$$

for all x ∈ R+ . The resolvent Rr and the increasing and decreasing solutions ψr and
ϕr are connected in a computationally very useful way. Indeed, we know from the
literature that for a given f ∈ Lr1 the resolvent Rr f can be expressed as
$$(R_r f)(x) = B_r^{-1}\varphi_r(x)\int_0^{x} \psi_r(y)f(y)m'(y)\,dy + B_r^{-1}\psi_r(x)\int_x^{\infty} \varphi_r(y)f(y)m'(y)\,dy, \tag{1.2}$$

for all x ∈ R+, where Br = (ψr'(x)/S'(x))ϕr(x) − (ϕr'(x)/S'(x))ψr(x) denotes the Wronskian determinant, see [5], pp. 19. We remark that the value of Br does not depend on the state variable x but depends on the rate r.
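Since the representation (1.2) reduces the resolvent to two one-dimensional integrals, it can be evaluated by ordinary quadrature whenever ψr, ϕr and m' are known in closed form. The following minimal Python sketch (not from the paper; the geometric Brownian motion dynamics and all parameter values are illustrative assumptions) evaluates (1.2) numerically and checks it against the known closed-form resolvent of the identity payoff:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative setting: GBM dX = mu*X dt + sigma*X dW on R_+, both boundaries natural.
mu, sigma, r = 0.03, 0.3, 0.05

def roots(rate):
    # Roots of (1/2)sigma^2 b(b-1) + mu b - rate = 0; psi_r(x) = x**bp, phi_r(x) = x**bm.
    a, b = 0.5 * sigma**2, mu - 0.5 * sigma**2
    d = np.sqrt(b * b + 4 * a * rate)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

bp, bm = roots(r)
psi = lambda x: x**bp
phi = lambda x: x**bm
m_dens = lambda x: 2.0 / sigma**2 * x**(2 * mu / sigma**2 - 2)  # speed density m'(x)
B_r = bp - bm  # Wronskian of (1.2); constant in x for GBM

def resolvent(f, x):
    """(R_r f)(x) computed from the representation (1.2)."""
    lower = quad(lambda y: psi(y) * f(y) * m_dens(y), 0.0, x)[0]
    upper = quad(lambda y: phi(y) * f(y) * m_dens(y), x, np.inf)[0]
    return (phi(x) * lower + psi(x) * upper) / B_r

# Sanity check: for f(x) = x and r > mu, (R_r f)(x) = x/(r - mu) in closed form.
x0 = 1.0
print(resolvent(lambda y: y, x0), x0 / (r - mu))  # both ≈ 50
```

The same routine works for any payoff in L¹r once ψr, ϕr and m' are supplied, which is what makes (1.2) "computationally very useful" in concrete models.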

1.2 The Optimal Stopping Problems

Having the underlying dynamics set up, we formulate, following [10], our main opti-
mal stopping problems. In comparison to the classical continuous time case, see, e.g.,
[2, 8, 17, 23], see also [20], the key difference is that the decision maker is not allowed
to (or cannot) exercise at any time she chooses but rather on the jump times of the in-
dependent signal process N. The process N jumps at times T₁ < T₂ < · · · < Tn < · · ·, where the interarrival times {T₁, T₂ − T₁, T₃ − T₂, . . .} are IID exponential with mean 1/λ. We remark that by convention T₀ = 0 and T∞ = ∞.
In the first optimal stopping problem, the decision maker cannot exercise at the
initial time t = 0. This means that the first jump time T1 is the first potentially rea-
sonable moment for her to exercise. In this setting, the class of admissible stopping
times reads as
 
$$T = \left\{\tau : \text{for all } \omega \in \Omega,\ \tau(\omega) = T_n(\omega) \text{ for some } n \in \{1, 2, \ldots, \infty\}\right\}. \tag{1.3}$$

Let r > 0 be the constant discount rate and g : R+ → R the exercise payoff function, which is assumed to be at least continuous. The optimal stopping problem is now to maximize the expected present value of the exercise payoff over τ ∈ T, i.e. to determine the optimal value function

$$V(x) = \sup_{\tau\in T} E_x\left[e^{-r\tau}g(X_\tau)\mathbf{1}_{\{\tau<\zeta\}}\right]. \tag{1.4}$$

Moreover, we want to characterize the optimal stopping time τ∗ which constitutes this value.
The second optimal stopping problem is otherwise the same as the first but now the
decision maker can exercise immediately, i.e., at t = 0. Now, the class of admissible
stopping times reads as
 
$$T_0 = \left\{\tau : \text{for all } \omega \in \Omega,\ \tau(\omega) = T_n(\omega) \text{ for some } n \in \{0, 1, 2, \ldots, \infty\}\right\}. \tag{1.5}$$

The corresponding optimal stopping problem reads as



$$V_0(x) = \sup_{\tau\in T_0} E_x\left[e^{-r\tau}g(X_\tau)\mathbf{1}_{\{\tau<\zeta\}}\right], \tag{1.6}$$

and the optimal stopping time is denoted as τ₀∗. The reason for the simultaneous introduction of these problems is mostly technical, as their analyses will be intertwined.

1.3 Main Result and Discussion

In the literature of optimal stopping problems of the form (1.4), the incorporated
exogenous Poisson processes, or more general renewal processes, appear in various

roles. In principle, this process can affect three different components of the problem,
namely the parameters of the underlying dynamics, the payoff structure, and/or the
set of admissible exercise times. Models where the underlying is affected fall into the class of regime switching models, where the changes in the drift and volatility are triggered exogenously, see, e.g., [12, 14, 16]. The payoff structure is affected, for example, in a real option approach to the technology adoption of a value maximizing firm, where new technologies emerge according to the jumps of an exogenous innovation process, see, e.g., [3, 4, 6]. More precisely, the exogenous innovation process affects the firm's exit (or entrance) strategy as the adoption of new technologies changes the expected present value of the cash flow accrued from production.
The setup of the study at hand serves as an example of a class of problems where
the set of admissible stopping times is affected by the exogenous signal process. This
class of problems was first proposed in [10], where the authors solve the special case
of perpetual American call with underlying geometric Brownian motion. The same
signal process setting was adopted in [13], where the authors generalize the results of
[24] for stopping geometric Brownian motion at its maximum. Generally speaking, the process N can be seen as an exogenous constraint on the decision maker's ability to exercise. This constraint has different interpretations. In [10], the authors propose, along the lines of [22], that the signal process N reflects liquidity effects, i.e., the process N dictates the times at which the asset is available to trade. Following [13], we remark that the considered optimal stopping problem can also be seen as a valuation problem of a randomized version of a perpetual Bermudan option, where the contract allows the holder to exercise only at the jump times of the process N. The process N can also be seen as an information constraint. Now, the holder is able to exercise at all times but can observe the return process only at the jump times of N. The holder is forced to make her timing decision based on partial information on X, where the signal process N stipulates the exogenous restriction on the information available to her. In this setting, the sample paths observed by the decision maker are pure jump trajectories with jumps at the Poissonian times Ti, remaining constant in between, see
Fig. 1.
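The observation pattern of Fig. 1 is easy to reproduce by simulation. The sketch below is illustrative only: the geometric Brownian motion dynamics, the Euler discretization, and all parameter values are our own assumptions, not part of the model specification. It builds the pure jump trajectory seen by the decision maker from a finely discretized path of X and Poissonian observation times:

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed; purely illustrative parameters
mu, sigma, lam, horizon, dt = 0.03, 0.3, 5.0, 1.0, 1e-3

# Fine-grid path of the (fully unobserved) diffusion X, here a GBM with X_0 = 1.
n = int(horizon / dt)
t = np.linspace(0.0, horizon, n + 1)
incr = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
X = np.concatenate(([1.0], np.exp(np.cumsum(incr))))

# Jump times T_1 < T_2 < ... of the signal process N: cumsum of Exp(lam) interarrivals.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=50))
arrivals = arrivals[arrivals < horizon]
observed_vals = X[np.searchsorted(t, arrivals)]  # value of X at the nearest grid point

# The decision maker sees a pure jump trajectory: constant between the T_i,
# updated to the current value of X at each observation time (cf. Fig. 1).
observed_path = np.full(n + 1, np.nan)  # NaN before the first observation
for a, v in zip(arrivals, observed_vals):
    observed_path[t >= a] = v
```

Any excursion of X that begins and ends between two consecutive Tᵢ, like the high return around t = 0.6 in Fig. 1, is invisible in `observed_path`.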
Our objective is to prove a generalization of the main result in [10]. This generalization, which is new to the best of our knowledge, is formulated in the next theorems.

Theorem 1.1 Assume that the upper boundary ∞ is natural and the lower boundary 0 is natural, entrance, exit or killing for the underlying X. Assume that the payoff g is continuous and in L¹r. Furthermore, assume that there is a unique state x̂ which maximizes the function x ↦ g(x)/ψr(x), that this function is nondecreasing on (0, x̂) and nonincreasing on (x̂, ∞), and that it satisfies the limiting conditions lim_{x→0+} g(x)/ψr(x) = lim_{x→∞} g(x)/ψr(x) = 0. Then the threshold x∗ < x̂ characterized uniquely by the condition

$$\psi_r(x^*)\int_{x^*}^{\infty} \varphi_{r+\lambda}(y)g(y)m'(y)\,dy = g(x^*)\int_{x^*}^{\infty} \varphi_{r+\lambda}(y)\psi_r(y)m'(y)\,dy$$

gives rise to the optimal stopping region [x∗, ∞) for the optimal stopping problems (1.4) and (1.6). Moreover, the optimal value functions V ∈ C²(R+) and V₀ ∈ C(R+)

Fig. 1 A possible realization of the underlying diffusion X (grey trajectory) and the pure jump path determined by the exponentially arriving observations of X (black trajectory). In this realization, the high return around t = 0.6 is not observed and therefore missed by the investor

can be written as

$$V(x) = \lambda(R_{r+\lambda}V_0)(x) = \begin{cases}\lambda(R_{r+\lambda}g)(x) + \dfrac{g(x^*) - \lambda(R_{r+\lambda}g)(x^*)}{\varphi_{r+\lambda}(x^*)}\,\varphi_{r+\lambda}(x), & x \geq x^*,\\[8pt] \dfrac{g(x^*)}{\psi_r(x^*)}\,\psi_r(x), & x < x^*,\end{cases} \tag{1.7}$$

and

$$V_0(x) = \begin{cases} g(x), & x \geq x^*,\\[8pt] \dfrac{g(x^*)}{\psi_r(x^*)}\,\psi_r(x), & x < x^*.\end{cases} \tag{1.8}$$

Theorem 1.2 Assume that the lower boundary 0 is natural and the upper boundary ∞ is natural, entrance, exit or killing for the underlying X. Assume that the payoff g is continuous and in L¹r. Furthermore, assume that there is a unique state x̃ which maximizes the function x ↦ g(x)/ϕr(x), that this function is nondecreasing on (0, x̃) and nonincreasing on (x̃, ∞), and that it satisfies the limiting conditions lim_{x→0+} g(x)/ϕr(x) = lim_{x→∞} g(x)/ϕr(x) = 0. Then the threshold x† > x̃ characterized uniquely by the condition

$$\varphi_r(x^\dagger)\int_0^{x^\dagger} \psi_{r+\lambda}(y)g(y)m'(y)\,dy = g(x^\dagger)\int_0^{x^\dagger} \psi_{r+\lambda}(y)\varphi_r(y)m'(y)\,dy$$

gives rise to the optimal stopping region (0, x†] for the optimal stopping problems (1.4) and (1.6). Moreover, the optimal value functions V ∈ C²(R+) and V₀ ∈ C(R+)
can be written as

$$V(x) = \lambda(R_{r+\lambda}V_0)(x) = \begin{cases}\dfrac{g(x^\dagger)}{\varphi_r(x^\dagger)}\,\varphi_r(x), & x > x^\dagger,\\[8pt] \lambda(R_{r+\lambda}g)(x) + \dfrac{g(x^\dagger) - \lambda(R_{r+\lambda}g)(x^\dagger)}{\psi_{r+\lambda}(x^\dagger)}\,\psi_{r+\lambda}(x), & x \leq x^\dagger,\end{cases} \tag{1.9}$$

and

$$V_0(x) = \begin{cases}\dfrac{g(x^\dagger)}{\varphi_r(x^\dagger)}\,\varphi_r(x), & x > x^\dagger,\\[8pt] g(x), & x \leq x^\dagger.\end{cases} \tag{1.10}$$

We make a few remarks on the assumptions of Theorems 1.1 and 1.2. It is interesting to note that the existence of a unique optimal stopping threshold can be reduced essentially to the monotonicity properties of the function x ↦ g(x)/ψr(x) (or x ↦ g(x)/ϕr(x)). In comparison to [2], Theorem 3, we make additional assumptions on the limiting behavior of these functions and on the integrability of the payoff g. However, these additional assumptions are not very restrictive from the applications point of view. In this sense, it is interesting to note that the restriction of the admissible stopping times from the entire set of F-stopping times to random times with exponential arrivals does not result in any severe additional restrictions on the underlying X and the payoff g. As was mentioned earlier, the function ψr is an increasing solution of the ordinary second order differential equation (A − r)ψr = 0 satisfying suitable boundary conditions. Even though it is not possible to solve this ODE explicitly except in special cases, there are well developed methods for solving such equations numerically. This makes the numerical verification of the monotonicity and limiting conditions of the function x ↦ g(x)/ψr(x) plausible; the same applies naturally to Theorem 1.2 and the function x ↦ g(x)/ϕr(x).
The remainder of the paper is organized as follows. In Sect. 2, we carry out a proof of Theorem 1.1 by first deriving the candidates for the solutions and then verifying that these candidates are the actual solutions. We remark that the proof of Theorem 1.2 is completely analogous to that of Theorem 1.1 and will therefore be omitted. In Sect. 2, we also study the asymptotics of the solutions with respect to the parameter λ. In Sect. 3, we illustrate our results with four explicit examples including the case of [10]. Section 4 concludes the study.

2 Proof of the Main Result

2.1 Some Preliminary Analysis

We start the preliminary analysis by proving some useful properties of harmonic functions.

Lemma 2.1 Let f ∈ C(R+) and r > 0. If, in addition, there exists λ > 0 and an open A ⊆ R+ such that λ(Rr+λ f)(x) = f(x) for all x ∈ A, then (A − r)f(x) = 0 for all x ∈ A. On the contrary, if

(a) f is r-harmonic and the boundaries 0 and ∞ are natural, then λ(Rr+λ f)(x) = f(x),
(b) ∞ is natural and 0 is entrance, exit or killing, then λ(Rr+λ ψr)(x) = ψr(x) and λ(Rr+λ ϕr)(x) = ϕr(x) − A₁ϕr+λ(x), where A₁ = lim_{z→0} ϕr(z)/ϕr+λ(z) > 0,
(c) 0 is natural and ∞ is entrance, exit or killing, then λ(Rr+λ ϕr)(x) = ϕr(x) and λ(Rr+λ ψr)(x) = ψr(x) − A₂ψr+λ(x), where A₂ = lim_{z→∞} ψr(z)/ψr+λ(z) > 0,
(d) f is r-harmonic and the boundaries 0 and ∞ are entrance, exit or killing, then λ(Rr+λ f)(x) < f(x),

for all λ > 0 and x ∈ R+.

Proof Assume that there exists λ > 0 and an open A ⊆ R+ such that λ(Rr+λ f)(x) = f(x) for all x ∈ A. Now, using the representation (1.2) and the harmonicity properties of ψr+λ and ϕr+λ, it is a matter of differentiation to show that

$$\begin{aligned}(\mathcal{A} - r)(R_{r+\lambda}f)(x) &= B_{r+\lambda}^{-1}(\mathcal{A} - r)\varphi_{r+\lambda}(x)\int_0^x \psi_{r+\lambda}(y)f(y)m'(y)\,dy\\ &\quad + B_{r+\lambda}^{-1}(\mathcal{A} - r)\psi_{r+\lambda}(x)\int_x^\infty \varphi_{r+\lambda}(y)f(y)m'(y)\,dy - f(x)\\ &= B_{r+\lambda}^{-1}\lambda\varphi_{r+\lambda}(x)\int_0^x \psi_{r+\lambda}(y)f(y)m'(y)\,dy\\ &\quad + B_{r+\lambda}^{-1}\lambda\psi_{r+\lambda}(x)\int_x^\infty \varphi_{r+\lambda}(y)f(y)m'(y)\,dy - f(x)\\ &= \lambda(R_{r+\lambda}f)(x) - f(x) = 0,\end{aligned}$$

for all x ∈ A. Since f(x) = λ(Rr+λ f)(x) on A and A is open, the claim follows.
To prove the remaining claims, let f be r-harmonic. Then, in particular, f is twice continuously differentiable because we have assumed that μ and σ are continuous. Consider the Markov times Sn := inf{t ≥ 0 : Xt ∉ (n⁻¹, n)} for n ≥ 1. Now, Dynkin's formula, see [11], pp. 131–133, yields

$$E_x\left[e^{-(r+\lambda)(S_n\wedge k)}f(X_{S_n\wedge k})\right] = f(x) + E_x\left[\int_0^{S_n\wedge k} e^{-(r+\lambda)s}\underbrace{(\mathcal{A} - r)f(X_s)}_{=0}\,ds\right] - \lambda E_x\left[\int_0^{S_n\wedge k} e^{-(r+\lambda)s}f(X_s)\,ds\right], \tag{2.1}$$

for all k ∈ N. Since e^{−(r+λ)(Sn∧k)}f(XSn∧k) ≤ sup_{z∈[n⁻¹,n]} f(z) < ∞ for a fixed n and f is non-negative, we can use bounded (monotone) convergence to pass to the limit k → ∞ on the left (right) hand side of (2.1) and obtain

$$E_x\left[e^{-(r+\lambda)S_n}f(X_{S_n})\right] = f(x) - \lambda E_x\left[\int_0^{S_n} e^{-(r+\lambda)s}f(X_s)\,ds\right].$$

Since Sn is non-decreasing and Sn → ζ as n → ∞, monotone convergence yields

$$\lambda E_x\left[\int_0^{S_n} e^{-(r+\lambda)s}f(X_s)\,ds\right] \to \lambda(R_{r+\lambda}f)(x), \quad n \to \infty.$$

Let x ∈ (n⁻¹, n) for a given n ≥ 2. We know, see, e.g., [18], that

$$\begin{aligned}E_x\left[e^{-(r+\lambda)S_n}f(X_{S_n})\right] &= E_x\left[e^{-(r+\lambda)\tau_n}\mathbf{1}_{\{\tau_n<\tau_{n^{-1}}\}}\right]f(n) + E_x\left[e^{-(r+\lambda)\tau_{n^{-1}}}\mathbf{1}_{\{\tau_n>\tau_{n^{-1}}\}}\right]f(n^{-1})\\ &= \frac{\varphi_{r+\lambda}(n)f(n^{-1}) - \varphi_{r+\lambda}(n^{-1})f(n)}{\varphi_{r+\lambda}(n)\psi_{r+\lambda}(n^{-1}) - \varphi_{r+\lambda}(n^{-1})\psi_{r+\lambda}(n)}\,\psi_{r+\lambda}(x)\\ &\quad + \frac{\psi_{r+\lambda}(n^{-1})f(n) - \psi_{r+\lambda}(n)f(n^{-1})}{\varphi_{r+\lambda}(n)\psi_{r+\lambda}(n^{-1}) - \varphi_{r+\lambda}(n^{-1})\psi_{r+\lambda}(n)}\,\varphi_{r+\lambda}(x),\end{aligned} \tag{2.2}$$

where τy = inf{t ≥ 0 : Xt = y} denotes the first hitting time of the state y. To proceed, we prove the claim (b)—claims (a), (c), and (d) are treated in the same manner. Consider first the case f = ψr. We rewrite (2.2) as

$$E_x\left[e^{-(r+\lambda)S_n}\psi_r(X_{S_n})\right] = \frac{\psi_r(n)}{\psi_{r+\lambda}(n)}\underbrace{\frac{1 - \frac{\psi_r(n^{-1})}{\psi_r(n)}\frac{\varphi_{r+\lambda}(n)}{\varphi_{r+\lambda}(n^{-1})}}{1 - \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)}\frac{\varphi_{r+\lambda}(n)}{\varphi_{r+\lambda}(n^{-1})}}}_{:=a_1(n)}\psi_{r+\lambda}(x) + \frac{\psi_r(n^{-1})}{\varphi_{r+\lambda}(n^{-1})}\underbrace{\frac{1 - \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)}\frac{\psi_r(n)}{\psi_r(n^{-1})}}{1 - \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)}\frac{\varphi_{r+\lambda}(n)}{\varphi_{r+\lambda}(n^{-1})}}}_{:=a_2(n)}\varphi_{r+\lambda}(x). \tag{2.3}$$

Since ψ·(n) → ∞ as n → ∞, the monotonicity properties of ψ· and ϕ· imply that a₁(n) → 1 as n → ∞—see [5], pp. 18–20, for the limiting behavior of ψ· and ϕ·. On the other hand, since

$$\frac{\psi_r(n^{-1})}{\psi_r(n)} = E_{n^{-1}}\left[e^{-r\tau_n}\right] \geq E_{n^{-1}}\left[e^{-(r+\lambda)\tau_n}\right] = \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)},$$

we find using the assumed boundary behavior that lim sup_{n→∞} a₂(n) ≤ 1. Moreover, we observe from this inequality that the function x ↦ ψr(x)/ψr+λ(x) is decreasing. Now, since ∞ is natural (implying that lim_{n→∞} ψ·'(n)/S'(n) = ∞), we find by first using l'Hôpital's rule twice and then the identities (A − r)ψr = (A − (r + λ))ψr+λ = 0 coupled with the definition of S' that
$$\lim_{n\to\infty}\frac{\psi_r(n)}{\psi_{r+\lambda}(n)} = \lim_{n\to\infty}\frac{\psi_r'(n)/S'(n)}{\psi_{r+\lambda}'(n)/S'(n)} = \lim_{n\to\infty}\frac{S'(n)\psi_r''(n) - S''(n)\psi_r'(n)}{S'(n)\psi_{r+\lambda}''(n) - S''(n)\psi_{r+\lambda}'(n)} = \frac{r}{r+\lambda}\lim_{n\to\infty}\frac{\psi_r(n)}{\psi_{r+\lambda}(n)}. \tag{2.4}$$

This implies that the limiting value must be zero. Finally, the assumed boundary behavior implies that also ψr(n⁻¹)/ϕr+λ(n⁻¹) → 0 and, consequently, Ex[e^{−(r+λ)Sn}ψr(XSn)] → 0 as n → ∞. This proves the claim on ψr.
Consider now the case f = ϕr. We rewrite (2.2) as

$$E_x\left[e^{-(r+\lambda)S_n}\varphi_r(X_{S_n})\right] = \frac{\varphi_r(n)}{\psi_{r+\lambda}(n)}\underbrace{\frac{1 - \frac{\varphi_{r+\lambda}(n)}{\varphi_{r+\lambda}(n^{-1})}\frac{\varphi_r(n^{-1})}{\varphi_r(n)}}{1 - \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)}\frac{\varphi_{r+\lambda}(n)}{\varphi_{r+\lambda}(n^{-1})}}}_{:=b_1(n)}\psi_{r+\lambda}(x) + \frac{\varphi_r(n^{-1})}{\varphi_{r+\lambda}(n^{-1})}\underbrace{\frac{1 - \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)}\frac{\varphi_r(n)}{\varphi_r(n^{-1})}}{1 - \frac{\psi_{r+\lambda}(n^{-1})}{\psi_{r+\lambda}(n)}\frac{\varphi_{r+\lambda}(n)}{\varphi_{r+\lambda}(n^{-1})}}}_{:=b_2(n)}\varphi_{r+\lambda}(x). \tag{2.5}$$

Similarly to the previous case, we find that

$$\limsup_{n\to\infty} b_1(n) \leq 1 \quad\text{and}\quad \lim_{n\to\infty} b_2(n) = 1.$$

Since ϕr(n)/ψr+λ(n) → 0 as n → ∞, we conclude, analogously to (2.4), that

$$\lim_{n\to\infty} E_x\left[e^{-(r+\lambda)S_n}\varphi_r(X_{S_n})\right] = \lim_{z\to0}\frac{\varphi_r(z)}{\varphi_{r+\lambda}(z)}\,\varphi_{r+\lambda}(x) > 0, \tag{2.6}$$

proving the claim on ϕr. □

To illustrate the conclusion of Lemma 2.1, consider first a regular diffusion process X with the differential generator A = ½x⁴ d²/dx² and the initial state x > 0. This process can be identified as the reciprocal of a Bessel(3) process (aka a CEV process, see, e.g., [15]). The origin is natural and ∞ is an entrance boundary for X. Now, the functions ψr and ϕr read as ψr(x) = x exp(−√(2r) x⁻¹) and ϕr(x) = x sinh(√(2r) x⁻¹). Moreover, the Wronskian Br = √(2r). Using (1.2), it is a matter of integration to show that

$$\lambda(R_{r+\lambda}\psi_r)(x) = \psi_r(x)\left(1 - \exp\left(-\frac{\sqrt{2(r+\lambda)} - \sqrt{2r}}{x}\right)\right) = \psi_r(x) - \psi_{r+\lambda}(x)$$

and λ(Rr+λ ϕr)(x) = ϕr(x).

In particular, we note that λ(Rr+λ ψr)(x) → 0 as λ → 0. As another example, let X be a standard Brownian motion killed in the origin with the initial state x > 0. Now, the boundary ∞ is natural. In this case, the functions ψr and ϕr read as ψr(x) = sinh(√(2r) x) and ϕr(x) = exp(−√(2r) x). The Wronskian Br = √(2r). Now, using (1.2) it is straightforward to compute that

$$\lambda(R_{r+\lambda}\psi_r)(x) = \psi_r(x) \quad\text{and}\quad \lambda(R_{r+\lambda}\varphi_r)(x) = \varphi_r(x) - \varphi_{r+\lambda}(x).$$

This time we find that λ(Rr+λ ϕr)(x) → 0 as λ → 0.
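The identity λ(Rr+λ ψr) = ψr − ψr+λ for the reciprocal Bessel(3) example can also be confirmed numerically by inserting ψr into the representation (1.2) and integrating. A minimal sketch (the parameter values are our own illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

# Reciprocal Bessel(3) example: A = (1/2) x^4 d^2/dx^2, so S'(x) = 1 and m'(x) = 2/x^4.
r, lam = 0.05, 0.1

psi = lambda x, rate: x * np.exp(-np.sqrt(2 * rate) / x)
phi = lambda x, rate: x * np.sinh(np.sqrt(2 * rate) / x)
m_dens = lambda x: 2.0 / x**4
B = lambda rate: np.sqrt(2 * rate)  # Wronskian of psi_rate and phi_rate

def lam_resolvent_psi(x):
    """lambda*(R_{r+lam} psi_r)(x) computed from the representation (1.2)."""
    rate = r + lam
    lower = quad(lambda y: psi(y, rate) * psi(y, r) * m_dens(y), 0.0, x)[0]
    upper = quad(lambda y: phi(y, rate) * psi(y, r) * m_dens(y), x, np.inf)[0]
    return lam * (phi(x, rate) * lower + psi(x, rate) * upper) / B(rate)

x0 = 1.0
print(lam_resolvent_psi(x0), psi(x0, r) - psi(x0, r + lam))  # the two values agree
```

The same quadrature with f = ϕr reproduces λ(Rr+λ ϕr) = ϕr, in line with case (c) of Lemma 2.1 (0 natural, ∞ entrance).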


The assumptions of Theorem 1.1 restraining the choice of the payoff function g and the underlying X are relatively weak and easy to verify, at least numerically. We know from [2] that the ratio function x ↦ g(x)/ψr(x) and its monotonicity properties play a key role in the classical continuous time case. In the current setting, it is not the ratio x ↦ g(x)/ψr(x) but something at least formally quite similar that characterizes the optimal stopping rule. To make a precise statement, define the functions I : R+ → R and J : R+ → R as

$$I(x) = \int_x^{\infty} \varphi_{r+\lambda}(y)g(y)m'(y)\,dy, \qquad J(x) = \int_x^{\infty} \varphi_{r+\lambda}(y)\psi_r(y)m'(y)\,dy, \tag{2.7}$$

for all x ∈ R+. We remark that it follows from the proof of Lemma 2.1 that the function J is well-defined. The ratio function x ↦ I(x)/J(x) will play a key role when proving Theorem 1.1. The next lemma provides us with the required monotonicity properties of this function.

Lemma 2.2 Let the assumptions of Theorem 1.1 hold. Then there is a unique state x∗ < x̂ that maximizes the function x ↦ I(x)/J(x). Moreover, the function x ↦ I(x)/J(x) is nondecreasing on (0, x∗) and nonincreasing on (x∗, ∞).

Proof First, straightforward differentiation yields the condition

$$\frac{d}{dx}\left(\frac{I(x)}{J(x)}\right) \gtrless 0 \quad\text{if and only if}\quad \psi_r(x)I(x) \gtrless g(x)J(x). \tag{2.8}$$

Let x ≥ x̂. Since the function x ↦ g(x)/ψr(x) is nonincreasing on (x̂, ∞), we find that

$$\psi_r(x)I(x) - g(x)J(x) = \psi_r(x)\int_x^\infty \varphi_{r+\lambda}(y)\frac{g(y)}{\psi_r(y)}\psi_r(y)m'(y)\,dy - g(x)J(x) < \left(\psi_r(x)\frac{g(x)}{\psi_r(x)} - g(x)\right)J(x) = 0.$$

Furthermore, since the function x ↦ g(x)/ψr(x) tends to 0 as x → ∞, we conclude using the condition (2.8) that the function x ↦ I(x)/J(x) is nonincreasing on (x̂, ∞) and tends to 0 as x → ∞. On the other hand, since lim_{x→0+} g(x)/ψr(x) = 0 and I(x̂)/J(x̂) > 0, we find using the condition (2.8) that the function x ↦ I(x)/J(x) must have at least one interior maximum x∗ < x̂. Finally, since g(x∗)/ψr(x∗) = I(x∗)/J(x∗), x ↦ I(x)/J(x) is continuously differentiable, and x ↦ g(x)/ψr(x) is nondecreasing on (0, x̂), we conclude, again using (2.8), that the maximum x∗ is unique. □

In Lemma 2.2 we proved that the function x ↦ I(x)/J(x) has a unique global maximum x∗. We remark that x∗ is the unique state satisfying the condition

$$\psi_r(x^*)\int_{x^*}^{\infty} \varphi_{r+\lambda}(y)g(y)m'(y)\,dy = g(x^*)\int_{x^*}^{\infty} \varphi_{r+\lambda}(y)\psi_r(y)m'(y)\,dy. \tag{2.9}$$
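In concrete models, the state x∗ can be computed from (2.9) by one-dimensional root-finding. The sketch below is illustrative: it assumes the geometric Brownian motion/American call setup of [10] and parameter values of our own choosing, and locates x∗ as the unique zero of ψr·I − g·J on (K, x̂), in line with Lemma 2.2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Setting of [10]: GBM underlying, call payoff g(x) = (x - K)^+. Illustrative numbers.
mu, sigma, r, lam, K = 0.03, 0.3, 0.05, 1.0, 1.0

def beta(rate):
    # Roots of (1/2)sigma^2 b(b-1) + mu b - rate = 0.
    a, b = 0.5 * sigma**2, mu - 0.5 * sigma**2
    d = np.sqrt(b * b + 4 * a * rate)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

bp, _ = beta(r)          # psi_r(x) = x**bp
_, bm_l = beta(r + lam)  # phi_{r+lam}(x) = x**bm_l
psi = lambda x: x**bp
phi_l = lambda x: x**bm_l
g = lambda x: np.maximum(x - K, 0.0)
m_dens = lambda x: 2.0 / sigma**2 * x**(2 * mu / sigma**2 - 2)

# I and J from (2.7), evaluated by quadrature.
I = lambda x: quad(lambda y: phi_l(y) * g(y) * m_dens(y), x, np.inf)[0]
J = lambda x: quad(lambda y: phi_l(y) * psi(y) * m_dens(y), x, np.inf)[0]

# F = psi_r*I - g*J is positive just above K and negative at xhat = argmax g/psi_r,
# so the unique root of (2.9) is bracketed on (K, xhat).
F = lambda x: psi(x) * I(x) - g(x) * J(x)
xhat = bp * K / (bp - 1.0)
xstar = brentq(F, K * (1 + 1e-6), xhat)
print(K < xstar < xhat)  # the constrained threshold lies below the unconstrained argmax
```

The bracket (K, x̂) is exactly the interval singled out by Lemma 2.2, so `brentq` is guaranteed a sign change under the theorem's assumptions.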

2.2 Necessary Conditions

We start the analysis of the optimal stopping problems (1.4) and (1.6) by deriving
necessary conditions for the existence of a unique optimal solution. As a result, we
find unique candidates for the optimal values V and V0 and the associated optimal
stopping rules. We derive the candidates using two different approaches.

2.2.1 Via the Resolvent Semigroup

In this subsection we derive the candidates for the optimal characteristics with a direct application of the Bellman principle of optimality. We use the variational inequality formulation of the Bellman principle, see, e.g., [19]. Furthermore, we exploit the close connection between the resolvent semigroup and exponentially distributed random times. Denote as G and G₀ the candidates for the optimal values of the problems (1.4) and
(1.6), respectively. Given the time homogeneity of the underlying X and the constant jump rate of the signal process N, we make the ansatz that the optimal continuation region is an interval (0, y∗) in both problems. The associated candidates for the optimal stopping times are the first exit times T_{N_{y∗}}, where N_{y∗} = inf{n ≥ 1 : X_{Tn} ≥ y∗}, in (1.4) and T_{N⁰_{y∗}}, where N⁰_{y∗} = inf{n ≥ 0 : X_{Tn} ≥ y∗}, in (1.6). In the problem (1.6), the decision maker chooses between two actions at every jump time Ti, i = 0, 1, . . . : she either exercises or waits. If she chooses to exercise, she gets the payoff g(x). On the other hand, if she waits, the expected discounted value accrued from this choice is determined by the expectation Ex[e^{−rU}G₀(X_U)] = λ(Rr+λ G₀)(x), where U is an independent, exponentially distributed random time with mean 1/λ. Given these arguments, we assume that the candidate G₀ satisfies the variational inequality
 
$$G_0(x) = \max\left\{g(x),\ \lambda(R_{r+\lambda}G_0)(x)\right\}, \tag{2.10}$$

for all x ∈ R+, see also [10], Remark 3, p. 144. To analyze (2.10), we remark that by assumption the candidate G₀ coincides with the payoff g on the exercise region [y∗, ∞) and satisfies the condition G₀(x) = λ(Rr+λ G₀)(x) on the continuation region (0, y∗). Using Lemma 2.1 we find that G₀(x) = c₁ψr(x) + c₂ϕr(x) for all x ∈ (0, y∗). Since we are looking for a solution that is bounded in the origin, we find that c₂ = 0. Moreover, since the value function is continuous, we conclude that G₀(x) = (g(y∗)/ψr(y∗))ψr(x) for all x ∈ (0, y∗).

Next we characterize the optimal exercise threshold y∗ such that the variational inequality (2.10) is satisfied. To this end, we find using Lemma 2.1 and the representation (1.2) that

$$G_0(x) = \frac{g(y^*)}{\psi_r(y^*)}\psi_r(x) = \lambda\left(R_{r+\lambda}\frac{g(y^*)}{\psi_r(y^*)}\psi_r\right)(x) = \lambda(R_{r+\lambda}G_0)(x) + \frac{\lambda}{B_{r+\lambda}}\psi_{r+\lambda}(x)\int_{y^*}^{\infty}\varphi_{r+\lambda}(z)\left(\frac{g(y^*)}{\psi_r(y^*)}\psi_r(z) - g(z)\right)m'(z)\,dz, \tag{2.11}$$

for all x < y∗. By comparing the expression (2.11) to Lemma 2.2 and the expression (2.9), we readily find that in (2.11) the last integral term vanishes and, consequently, the balance condition in (2.10) is satisfied if and only if y∗ = x∗, where x∗ is defined in (2.9). Now, the candidate G₀ can be expressed as

$$G_0(x) = \begin{cases} g(x), & x \geq x^*,\\[6pt] \dfrac{g(x^*)}{\psi_r(x^*)}\,\psi_r(x), & x < x^*. \end{cases} \tag{2.12}$$
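The role of the balance condition can be checked numerically: with y∗ = x∗ the waiting value λ(Rr+λ G₀) coincides with the candidate (2.12) on the continuation region, while it stays strictly below g on the exercise region. A sketch in the setting of [10] (geometric Brownian motion, call payoff, parameter values of our own choosing):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative GBM/call setting; we check that G0 from (2.12), with y* = x* from (2.9),
# satisfies the balance condition G0 = lambda*(R_{r+lam}G0) below x*.
mu, sigma, r, lam, K = 0.03, 0.3, 0.05, 1.0, 1.0

def beta(rate):
    a, b = 0.5 * sigma**2, mu - 0.5 * sigma**2
    d = np.sqrt(b * b + 4 * a * rate)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

bp, _ = beta(r)            # psi_r(x) = x**bp
bp_l, bm_l = beta(r + lam)  # psi_{r+lam}, phi_{r+lam} exponents
m_dens = lambda x: 2.0 / sigma**2 * x**(2 * mu / sigma**2 - 2)
g = lambda x: max(x - K, 0.0)
psi = lambda x: x**bp

# Threshold x* from (2.9), found as the root of psi_r*I - g*J bracketed on (K, xhat).
I = lambda x: quad(lambda y: y**bm_l * g(y) * m_dens(y), x, np.inf)[0]
J = lambda x: quad(lambda y: y**bm_l * psi(y) * m_dens(y), x, np.inf)[0]
xstar = brentq(lambda x: psi(x) * I(x) - g(x) * J(x), K * (1 + 1e-6), bp * K / (bp - 1))

G0 = lambda x: g(x) if x >= xstar else g(xstar) * psi(x) / psi(xstar)

def lam_resolvent_G0(x):
    """lambda*(R_{r+lam}G0)(x) via (1.2); the Wronskian is bp_l - bm_l for GBM."""
    lo = lambda y: y**bp_l * G0(y) * m_dens(y)
    up = lambda y: y**bm_l * G0(y) * m_dens(y)
    if x <= xstar:  # split the upper integral at the kink of G0 at x*
        lower = quad(lo, 0.0, x)[0]
        upper = quad(up, x, xstar)[0] + quad(up, xstar, np.inf)[0]
    else:
        lower = quad(lo, 0.0, xstar)[0] + quad(lo, xstar, x)[0]
        upper = quad(up, x, np.inf)[0]
    return lam * (x**bm_l * lower + x**bp_l * upper) / (bp_l - bm_l)

x_test = 0.5 * xstar
print(lam_resolvent_G0(x_test), G0(x_test))  # equal on the continuation region
```

Above x∗ the same routine returns a value strictly below g, which is consistent with the expected inequality G(x) < g(x) on the exercise region.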

We turn to the determination of the candidate G. In the problem (1.4) immediate exercise is not allowed, so the decision maker must first wait an exponentially distributed period with mean 1/λ before taking any action. After that she will face the same choice as in the problem (1.6), i.e., the choice of either exercising or postponing the exercise for another exponentially distributed random period. This argument gives rise to the balance condition

$$G(x) = \lambda(R_{r+\lambda}G_0)(x), \tag{2.13}$$

for all x ∈ R+, see also [10], Remark 3, p. 144. Assume that x∗ gives rise to the optimal exercise rule also in the problem (1.4). Then we find using the conditions (2.11) and (2.13) that

$$G(x) = \begin{cases} \lambda(R_{r+\lambda}G_0)(x), & x \geq x^*,\\[6pt] \dfrac{g(x^*)}{\psi_r(x^*)}\,\psi_r(x), & x < x^*. \end{cases}$$

Let x ≥ x∗. Then using Lemma 2.1 and the representation (1.2), we find that

$$G(x) = \lambda(R_{r+\lambda}g)(x) + \frac{\lambda}{B_{r+\lambda}}\frac{\varphi_{r+\lambda}(x)}{\varphi_{r+\lambda}(x^*)}\int_0^{x^*}\psi_{r+\lambda}(z)\left(\frac{g(x^*)}{\psi_r(x^*)}\psi_r(z) - g(z)\right)m'(z)\,dz = \lambda(R_{r+\lambda}g)(x) + \frac{g(x^*) - \lambda(R_{r+\lambda}g)(x^*)}{\varphi_{r+\lambda}(x^*)}\varphi_{r+\lambda}(x),$$

and, consequently, that the candidate G can be written as

$$G(x) = \begin{cases} \lambda(R_{r+\lambda}g)(x) + \dfrac{g(x^*) - \lambda(R_{r+\lambda}g)(x^*)}{\varphi_{r+\lambda}(x^*)}\,\varphi_{r+\lambda}(x), & x \geq x^*,\\[8pt] \dfrac{g(x^*)}{\psi_r(x^*)}\,\psi_r(x), & x < x^*. \end{cases} \tag{2.14}$$

We have now derived unique candidates (G, x∗) given by (2.14) and (2.9), and (G₀, x∗) given by (2.12) and (2.9) for the optimal characteristics of the problems (1.4) and (1.6), respectively, under the assumptions of Theorem 1.1. Since x∗ < x̂ = argmax{g/ψr}, we conclude that the candidate G₀ is only continuous over the boundary x∗, cf. [2]. On the other hand, since the functions μ and σ are assumed to be continuous and G₀ is continuous, we conclude using Lemma 2.1 that the candidate G is twice continuously differentiable.

2.2.2 Via a Free Boundary Problem

In the previous subsection we derived the candidates (G, x ∗ ) and (G0 , x ∗ ) for the
optimal characteristics of the problems (1.4) and (1.6) using the resolvent operator.
These candidates can also be derived using the free boundary approach of [10]. To do
this, we investigate the problem (1.4) and, similarly to Sect. 2.2.1, make the ansatz
that the optimal exercise rule is a one-sided threshold rule constituted by the first
exit time from the continuation region (0, y ∗ ). According to the Bellman principle,
we expect the candidate G to be r-harmonic in (0, y ∗ ). On the other hand, on the
exercise region [y ∗ , ∞) the decision maker cannot exercise unless the signal process
N jumps. In an infinitesimal time interval dt, the signal process N has probability λdt
of making a jump. This means that in time dt, the jump and, consequently, exercise
with payoff g(x) has probability λdt. On the other hand, the absence of jump forces
the decision maker to wait with probability 1 − λdt. Formally, this suggests with a
heuristic use of Dynkin's formula, see, e.g., [11], p. 133, that

$$G(x) = g(x)\lambda dt + (1 - \lambda dt)E_x\left[e^{-r\,dt}G(X_{dt})\right] = \lambda g(x)dt + (1 - \lambda dt)\left(G(x) + (\mathcal{A} - r)G(x)dt\right) = G(x) + (\mathcal{A} - r)G(x)dt + \lambda\left(g(x) - G(x)\right)dt,$$

for all x > y∗ under the intuition dt² = 0. Finally, this yields the condition

$$\left(\mathcal{A} - (r+\lambda)\right)G(x) = -\lambda g(x), \tag{2.15}$$

for all x > y∗. Moreover, we can expect that g(x) < G(x) on (0, y∗) and, due to the possibility that N does not jump when x ≥ y∗, that G(x) < g(x) on (y∗, ∞). To complete the free boundary problem, we must pose a boundary condition at y∗. Following [10], we require the smooth pasting principle to hold, i.e., the candidate G to be continuously differentiable over the boundary y∗. Under this condition it is elementary to check that G(y∗) = g(y∗). Now we are in a position to pose the free boundary problem: determine the unique solution (G, y∗) of the problem




$$\begin{cases} G(0+) \geq 0,\\[2pt] G(y^*) = g(y^*),\\[2pt] (\mathcal{A} - r)G(x) = 0 \ \text{ and } \ G(x) > g(x), & x < y^*,\\[2pt] (\mathcal{A} - (r+\lambda))G(x) = -\lambda g(x) \ \text{ and } \ G(x) < g(x), & x > y^*. \end{cases} \tag{2.16}$$

Assume now that a unique solution (G, y ∗ ) exists and that x < y ∗ . The condition
(A − r)G(x) = 0 implies that G can be expressed as G(x) = c1 ψr (x) + c2 ϕr (x),
where ci ≥ 0. Since we are looking for a solution that is bounded in the origin, we
find that c2 = 0. Now, let x ≥ y ∗ . A particular solution to the fourth condition of
the free boundary problem (2.16) is the resolvent λ(Rr+λ g) and, consequently, the
general solution can be written as G(x) = λ(Rr+λ g)(x) + d1 ψr+λ (x) + d2 ϕr+λ (x).
We observe that the assumptions of Theorem 1.1 imply that d1 = 0. Now, the second
condition in (2.16) implies that g(y ∗ ) = c1 ψr (y ∗ ) = λ(Rr+λ g)(y ∗ ) + d2 ϕr+λ (y ∗ ).
This in turn implies that
\[ c_1 = \frac{g(y^*)}{\psi_r(y^*)}, \qquad d_2 = \frac{g(y^*) - \lambda(R_{r+\lambda}g)(y^*)}{\varphi_{r+\lambda}(y^*)}, \]

and, consequently, that the candidate G can be expressed as

\[
G(x) =
\begin{cases}
\lambda(R_{r+\lambda}g)(x) + \dfrac{g(y^*) - \lambda(R_{r+\lambda}g)(y^*)}{\varphi_{r+\lambda}(y^*)}\,\varphi_{r+\lambda}(x), & x \geq y^*, \\[2ex]
\dfrac{g(y^*)}{\psi_r(y^*)}\,\psi_r(x), & x < y^*.
\end{cases}
\tag{2.17}
\]

To identify the candidate for the optimal stopping threshold, we use the smooth pasting principle. Indeed, since the candidate G is assumed to be continuously differentiable over the boundary y∗, we observe that the condition

\[ g(y^*)\,\frac{\psi_r'(y^*)}{\psi_r(y^*)} - \lambda(R_{r+\lambda}g)'(y^*) - \frac{g(y^*) - \lambda(R_{r+\lambda}g)(y^*)}{\varphi_{r+\lambda}(y^*)}\,\varphi_{r+\lambda}'(y^*) = 0 \tag{2.18} \]

must be satisfied. This can be rewritten as

\[ g(y^*)\left(\frac{\psi_r'(y^*)}{\psi_r(y^*)} - \frac{\varphi_{r+\lambda}'(y^*)}{\varphi_{r+\lambda}(y^*)}\right) = \lambda(R_{r+\lambda}g)'(y^*) - \frac{\varphi_{r+\lambda}'(y^*)}{\varphi_{r+\lambda}(y^*)}\,\lambda(R_{r+\lambda}g)(y^*). \]
By invoking the representation (1.2) and straightforward differentiation, we find that the right hand side can be expressed as

\[ \lambda(R_{r+\lambda}g)'(y^*) - \frac{\varphi_{r+\lambda}'(y^*)}{\varphi_{r+\lambda}(y^*)}\,\lambda(R_{r+\lambda}g)(y^*) = \lambda\,\frac{S'(y^*)}{\varphi_{r+\lambda}(y^*)}\int_{y^*}^\infty \varphi_{r+\lambda}(y)g(y)m'(y)\,dy. \]

Consequently, the optimality condition (2.18) can be rewritten as

\[ \lambda\psi_r(y^*)\int_{y^*}^\infty \varphi_{r+\lambda}(y)g(y)m'(y)\,dy = g(y^*)\left(\frac{\psi_r'(y^*)}{S'(y^*)}\,\varphi_{r+\lambda}(y^*) - \frac{\varphi_{r+\lambda}'(y^*)}{S'(y^*)}\,\psi_r(y^*)\right). \]
Denote w(x) = (ψ′_r(x)/S′(x))ϕ_{r+λ}(x) − (ϕ′_{r+λ}(x)/S′(x))ψ_r(x). It is a straightforward application of the harmonicity properties of ψ_r and ϕ_{r+λ} to establish that w′(x) = −λϕ_{r+λ}(x)ψ_r(x)m′(x) for all x ∈ R+. Now, the Fundamental Theorem of Calculus implies that

\[ w(y^*) = \lambda\int_{y^*}^\infty \varphi_{r+\lambda}(y)\psi_r(y)m'(y)\,dy, \]

and, consequently, that the optimality condition (2.18) can be expressed as

\[ \psi_r(y^*)\int_{y^*}^\infty \varphi_{r+\lambda}(y)g(y)m'(y)\,dy = g(y^*)\int_{y^*}^\infty \varphi_{r+\lambda}(y)\psi_r(y)m'(y)\,dy. \tag{2.19} \]

Under the assumptions of Theorem 1.1, we know from Lemma 2.2 that this equation
has a unique solution denoted as x ∗ . By combining the expressions (2.17) and (2.19),
we have the same candidate for the value of the problem (1.4) and, consequently, of
the problem (1.6) as we did in Sect. 2.2.1. However, we had to make a priori assump-
tions on the differentiability of the candidate G over the optimal stopping boundary
in setting up and solving the free boundary problem (2.16). This is in contrast to
Sect. 2.2.1, where we formulated the variational inequalities (2.10) in terms of the
resolvent operator and used its properties to identify the boundary x ∗ and compute
the candidates G0 and G directly without such assumptions. It is also interesting to note how different approaches are better suited to different problems. Indeed, we saw that the derivation of the candidate G0 is natural using the resolvent semigroup, whereas the free boundary approach is tailor-made for the derivation of the candidate G.
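In concrete cases the threshold equation (2.19) is typically solved numerically. The following sketch (an illustration, not part of the original analysis) bisects (2.19) for the geometric Brownian motion call treated in Sect. 3.1 below, using the closed-form integrals J and I derived there; the parameter values are the illustrative Fig. 2 configuration.

```python
import math

# Illustrative parameters (the Fig. 2 configuration of Sect. 3.1 below)
mu, sigma2, r, lam, K = 0.01, 0.1, 0.05, 0.1, 1.2

theta = 0.5 - mu / sigma2
b = theta + math.sqrt(theta**2 + 2 * r / sigma2)             # psi_r(x) = x**b
beta = theta + math.sqrt(theta**2 + 2 * (r + lam) / sigma2)  # psi_{r+lam}(x) = x**beta
kappa = beta - b

def J(x):
    # J(x) = int_x^oo phi_{r+lam}(y) psi_r(y) m'(y) dy  (closed form, Sect. 3.1)
    return 2.0 / (sigma2 * kappa) * x ** (-kappa)

def I(x):
    # I(x) = int_x^oo phi_{r+lam}(y) (y - K)^+ m'(y) dy, valid for x > K
    return 2.0 / sigma2 * x ** (-beta) * (x / (beta - 1) - K / beta)

def F(x):
    # difference of the two sides of the optimality condition (2.19)
    return x**b * I(x) - (x - K) * J(x)

lo, hi = K * 1.000001, 10.0    # F > 0 just above K and F < 0 for large x
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if F(lo) * F(mid) <= 0 else (mid, hi)

x_star = 0.5 * (lo + hi)
x_star_closed = b * (beta - 1) / (beta * (b - 1)) * K   # closed form of Sect. 3.1
print(round(x_star, 3), round(x_star_closed, 3))        # both ≈ 2.01
```

The bisection recovers the closed-form threshold, illustrating that (2.19) pins down x∗ uniquely in this example.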

2.3 The Verification Phase

In the previous subsections we derived the candidates (G, x ∗ ) and (G0 , x ∗ ) for the
solutions of the problems (1.4) and (1.6), respectively. From the point of view of the
verification, the continuous time formulations (1.4) and (1.6) are not that handy. In
order to remedy this, define the filtration G = {Gn }n≥0 as Gn := FTn for all n ≥ 0,
where Ti is the ith jump time of the signal process N , and the G-adapted process Z
as Zn := (Tn, XTn). Moreover, define the sets N and N0 as

\[ \mathcal{N} = \{N \geq 1 : N \text{ is a } \mathbb{G}\text{-stopping time}\}, \qquad \mathcal{N}_0 = \{N \geq 0 : N \text{ is a } \mathbb{G}\text{-stopping time}\}. \]

Then Lemma 1 of [10] implies that the optimal stopping problems (1.4) and (1.6) can be formulated alternatively as

\[ V(x) = \sup_{N \in \mathcal{N}} \mathbb{E}\big[\tilde g(Z_N)\mid Z_0 = (0,x)\big], \qquad V_0(x) = \sup_{N \in \mathcal{N}_0} \mathbb{E}\big[\tilde g(Z_N)\mid Z_0 = (0,x)\big], \tag{2.20} \]

for all x ∈ R+, where g̃(Zn) := e^{−rT_n}g(X_{T_n}). The formulations (2.20) allow a straightforward use of martingale techniques in the verification phase, as we will shortly see. We recall that the candidates G and G0 are connected via the condition G(x) = λ(R_{r+λ}G0)(x) for all x ∈ R+. Using this, we are in a position to prove the following.

Lemma 2.3 Let the assumptions of Theorem 1.1 hold. Then the process

\[ S := \big(e^{-rT_n}G_0(X_{T_n});\ \mathcal{G}_n\big)_{n \geq 0} \]

is a non-negative uniformly integrable supermartingale for all initial states X0 = x ∈ R+.

Proof Let U be an exponentially distributed random time with mean 1/λ and independent of X. Then G0(x) ≥ G(x) = λ(R_{r+λ}G0)(x) = E_x[e^{−rU}G0(X_U)] for all x ∈ R+. Thus the process S is a non-negative supermartingale. In order to prove uniform integrability, it suffices to show that sup_n E_x[S_n] < ∞ and sup_n E_x[S_n 1_A] → 0 as P(A) → 0; then uniform integrability follows from [25], p. 190, Lemma 2.

Let x ∈ R+. Define the process L := (e^{−rT_n}ψ_r(X_{T_n})/ψ_r(x); G_n)_{n≥0}. First, we find using Lemma 2.1 that E_x[L_1] = λ(R_{r+λ}ψ_r)(x)/ψ_r(x) = 1. Thus, the strong Markov property of the underlying X implies that L satisfies E_x[L_n] = 1 for all n ≥ 0. Now, define the measure P∗_x on (Ω, F) as

\[ \mathbb{P}^*_x(A) = \mathbb{E}_x[L_n 1_A], \]

see [5], p. 34. Let A ∈ F and n ≥ 0. By substituting G0 into S, we find that

\[ \frac{\mathbb{E}_x[S_n 1_A]}{\psi_r(x)} = \mathbb{E}_x\!\left[1_A \frac{G_0(X_{T_n})}{\psi_r(X_{T_n})}L_n\right] = \mathbb{E}_x\!\left[1_A 1_{\{X_{T_n} < x^*\}}\frac{g(x^*)}{\psi_r(x^*)}L_n\right] + \mathbb{E}_x\!\left[1_A 1_{\{X_{T_n} \geq x^*\}}\frac{g(X_{T_n})}{\psi_r(X_{T_n})}L_n\right]. \tag{2.21} \]

Since x̂ is the global maximum of the function x ↦ g(x)/ψ_r(x), expression (2.21) yields

\[ 0 \leq \frac{\mathbb{E}_x[S_n 1_A]}{\psi_r(x)} \leq \frac{g(\hat x)}{\psi_r(\hat x)}\Big(\mathbb{E}_x\big[1_A 1_{\{X_{T_n} < x^*\}}L_n\big] + \mathbb{E}_x\big[1_A 1_{\{X_{T_n} \geq x^*\}}L_n\big]\Big) = \frac{g(\hat x)}{\psi_r(\hat x)}\,\mathbb{P}^*_x(A). \tag{2.22} \]

First, let A = Ω in the inequality (2.22). Since E_x[S_n] ≤ (g(x̂)/ψ_r(x̂))ψ_r(x), we find that sup_n E_x[S_n] < ∞. On the other hand, it is evident from the definition of P∗_x that P∗_x(A) → 0 whenever P_x(A) → 0. Thus, we conclude using the inequality (2.22) that E_x[S_n 1_A] → 0 and, consequently, that sup_n E_x[S_n 1_A] → 0 as P_x(A) → 0. □

In Lemma 2.3 we showed that under the assumptions of Theorem 1.1 the process
n → e−rTn G0 (XTn ) is not only a non-negative G-supermartingale but also uniformly
integrable. Uniform integrability will be needed in the proof of the next lemma, where
we use optional stopping with a stopping time that is not almost surely bounded.
Lemma 2.4 Let the assumptions of Theorem 1.1 hold. Let τ0∗ = T_{N^0_{x∗}} where N^0_{x∗} = inf{n ≥ 0 : X_{T_n} ≥ x∗}. Then

\[ G_0(x) = \mathbb{E}_x\big[e^{-r\tau_0^*}g(X_{\tau_0^*})\big] = V_0(x), \]

for all x ∈ R+.

Proof Coupled with Lemma 2.3, the optional sampling theorem implies that G0(x) ≥ E_x[e^{−rT_N}G0(X_{T_N})] ≥ E_x[e^{−rT_N}g(X_{T_N})] for all G-stopping times N. Hence, G0(x) ≥ V0(x) for all x ∈ R+. To prove that this inequality holds as an equality, i.e., that the function G0 can be attained by the admissible stopping rule “stop at time τ0∗”, it suffices to show that the stopped process

\[ Q = \Big(e^{-rT_{N^0_{x^*}\wedge n}}\,G_0\big(X_{T_{N^0_{x^*}\wedge n}}\big);\ \mathcal{G}_n\Big)_{n\geq 0} \]

is a martingale. We recall the definition of the process S from Lemma 2.3. Now, for each n ≥ 1, we find that

\[ \mathbb{E}_x[Q_n \mid \mathcal{G}_{n-1}] = \mathbb{E}_x\big[S_n 1_{\{N^0_{x^*} \geq n\}} \mid \mathcal{G}_{n-1}\big] + \sum_{i=0}^{n-1} S_i 1_{\{N^0_{x^*} = i\}}. \tag{2.23} \]

Denote as U an independent exponentially distributed random time with mean 1/λ. Using the strong Markov property and the property G(x) = λ(R_{r+λ}G0)(x), we find that the first term on the right hand side of (2.23) can be written as

\[ \mathbb{E}_x\big[S_n 1_{\{N^0_{x^*} \geq n\}} \mid \mathcal{G}_{n-1}\big] = e^{-rT_{n-1}}\,\mathbb{E}_{X_{T_{n-1}}}\big[e^{-rU}G_0(X_U)\big]\,1_{\{N^0_{x^*} \geq n\}} = e^{-rT_{n-1}}G(X_{T_{n-1}})\,1_{\{N^0_{x^*} \geq n\}}. \tag{2.24} \]

Now, since G0(x) = G(x) when x ≤ x∗, the expressions (2.23) and (2.24) imply that

\[ \mathbb{E}_x[Q_n \mid \mathcal{G}_{n-1}] = S_{n-1}1_{\{N^0_{x^*} \geq n\}} + \sum_{i=0}^{n-1} S_i 1_{\{N^0_{x^*} = i\}} = Q_{n-1}. \]

Finally, since Q is also uniformly integrable, the result follows by optional sampling, i.e.,

\[ G_0(x) = \mathbb{E}_x\big[Q_{N^0_{x^*}}\big] = \mathbb{E}_x\big[e^{-r\tau_0^*}G_0(X_{\tau_0^*})\big] = \mathbb{E}_x\big[e^{-r\tau_0^*}g(X_{\tau_0^*})\big], \]

for all x ∈ R+. □

We proved in Lemma 2.4 that our candidates G0 and T_{N^0_{x∗}} are the optimal characteristics of the problem (1.6). We turn now back to the problem (1.4) and use Lemmas 2.3–2.4 to prove that the candidates G and T_{N_{x∗}} are the optimal characteristics of the problem (1.4).

Lemma 2.5 Let the assumptions of Theorem 1.1 hold. Let τ∗ = T_{N_{x∗}} where N_{x∗} = inf{n ≥ 0 : X_{T_n} ≥ x∗}. Then

\[ G(x) = \mathbb{E}_x\big[e^{-r\tau^*}g(X_{\tau^*})\big] = V(x), \]

for all x ∈ R+.

Proof Since the process S from Lemma 2.3 is a non-negative supermartingale, we find that

\[ \mathbb{E}_x\big[e^{-rT_N}g(X_{T_N})\big] \leq \mathbb{E}_x\big[e^{-rT_N}G_0(X_{T_N})\big] \leq \mathbb{E}_x\big[e^{-rT_1}G_0(X_{T_1})\big] = \lambda(R_{r+\lambda}G_0)(x) = G(x), \]

for all G-stopping times N ≥ 1 and x ∈ R+. By taking the supremum over all such N, we obtain the inequality V(x) ≤ G(x) for all x ∈ R+. To prove that this inequality holds as an equality, it suffices to show that the value G is attained by the admissible stopping rule “stop at time τ∗”. By conditioning on the first jump time T1, we find by using the strong Markov property, Lemma 2.4, and the condition G(x) = λ(R_{r+λ}G0)(x) that

\[ \mathbb{E}_x\big[e^{-r\tau^*}g(X_{\tau^*})\big] = \mathbb{E}_x\left[\int_0^\infty e^{-rt}\,\mathbb{E}_{X_t}\big[e^{-r\tau_0^*}g(X_{\tau_0^*})\big]\,\lambda e^{-\lambda t}\,dt\right] = G(x), \]

for all x ∈ R+. □
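The stopping rule of Lemma 2.5 can also be checked by simulation. The sketch below is an illustrative Monte Carlo experiment (not from the paper) for the geometric Brownian motion call of Sect. 3.1: the process is sampled at the jump times of the signal process, stopped at the first arrival at which it exceeds the closed-form threshold x∗ derived there, and the average discounted payoff is compared with the closed-form value.

```python
import math, random

# Illustrative parameters: the Fig. 2 configuration of Sect. 3.1
mu, sigma2, r, lam, K = 0.01, 0.1, 0.05, 0.1, 1.2
sigma = math.sqrt(sigma2)
theta = 0.5 - mu / sigma2
b = theta + math.sqrt(theta**2 + 2 * r / sigma2)
beta = theta + math.sqrt(theta**2 + 2 * (r + lam) / sigma2)
x_star = b * (beta - 1) / (beta * (b - 1)) * K          # optimal threshold

def V_below(x):
    # value for x < x*: (x* - K) psi_r(x) / psi_r(x*), with psi_r(x) = x**b
    return (x_star - K) / x_star**b * x**b

def one_path(x, rng):
    # follow X at the jump times of N; stop at the first arrival with X >= x*
    disc = 1.0
    while x < x_star:
        dt = rng.expovariate(lam)                        # gap to the next signal
        disc *= math.exp(-r * dt)
        x *= math.exp((mu - 0.5 * sigma2) * dt
                      + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        if disc < 1e-9:                                  # negligible remaining value
            return 0.0
    return disc * (x - K)

rng = random.Random(1)
x0, n = 2.0, 20000
mc = sum(one_path(x0, rng) for _ in range(n)) / n
print(round(mc, 2), round(V_below(x0), 2))               # the values should nearly agree
```

With a fixed seed the Monte Carlo estimate lands within a few tenths of a percent of the analytical value, consistent with Lemma 2.5.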

2.4 A Note on the Asymptotics

We study the asymptotics of the optimal characteristics x∗, V and V0 as λ → 0 and λ → ∞. To this end, we remark that the thresholds x̂ and x̃ defined in Theorems 1.1 and 1.2 are the optimal exercise thresholds for the classical continuous time stopping problems corresponding to Theorems 1.1 and 1.2 and, given that the payoff g is sufficiently smooth, satisfy (uniquely) the smooth pasting conditions g(x̂)ψ′_r(x̂) = ψ_r(x̂)g′(x̂) and g(x̃)ϕ′_r(x̃) = ϕ_r(x̃)g′(x̃), cf. [2]. Moreover, the value functions V̂ and Ṽ corresponding to x̂ and x̃ read as

\[
\hat V(x) =
\begin{cases}
g(x), & x \geq \hat x, \\[1ex]
\dfrac{g(\hat x)}{\psi_r(\hat x)}\,\psi_r(x), & x < \hat x,
\end{cases}
\qquad
\tilde V(x) =
\begin{cases}
\dfrac{g(\tilde x)}{\varphi_r(\tilde x)}\,\varphi_r(x), & x > \tilde x, \\[1ex]
g(x), & x \leq \tilde x,
\end{cases}
\tag{2.25}
\]

Proposition 2.6 Let x∗, V and V0 be given by Theorem 1.1. Then

(1) x∗ is an increasing function of λ,
(2) x∗ → x̂, V(x) → V̂(x) and V0(x) → V̂(x) as λ → ∞,
(3) V(x) = 0 and V0(x) = g(x) when λ = 0,

for all x ∈ R+.
Proof First, we notice that in the limit λ = 0 the signal process jumps only at T0 = 0 and T∞ = ∞, implying that V(x) = 0 and V0(x) = g(x) for all x ∈ R+. Now, let x ≥ x̂. Since diffusions are Feller processes, we have that λ(R_{r+λ}g) → g as λ → ∞ in sup-norm, see [21], p. 235. By coupling this with the representation

\[ V(x) = \lambda(R_{r+\lambda}g)(x) + \big(g(x^*) - \lambda(R_{r+\lambda}g)(x^*)\big)\,\mathbb{E}_x\big[e^{-(r+\lambda)\tau_{x^*}}\big] \]

(see (1.7)), we deduce that V(x) → g(x) from below as λ → ∞. Monotonicity of this convergence and continuity of V across the boundary x∗ imply that x∗ increases as λ increases and, consequently, that x∗ → x̂ as λ → ∞. Finally, we conclude that V(x) → V̂(x) and V0(x) → V̂(x) for all x ∈ R+ as λ → ∞. □

The following proposition can be proved completely analogously under the assumptions of Theorem 1.2.

Proposition 2.7 Let x†, V and V0 be given by Theorem 1.2. Then

(1) x† is a decreasing function of λ,
(2) x† → x̃, V(x) → Ṽ(x) and V0(x) → Ṽ(x) as λ → ∞,
(3) V(x) = 0 and V0(x) = g(x) when λ = 0,

for all x ∈ R+.

The results of Propositions 2.6 and 2.7 are intuitively plausible. In fact, Proposition 2.6 shows unambiguously that the optimal exercise threshold of the full information case dominates the optimal exercise threshold under constrained information. This is a reasonable result and reflects the phenomenon that the decision maker will settle for a smaller return when facing uncertainty about the length of the waiting time before the next information update. Moreover, due to the partial information on the underlying X, profitable moments can be missed, and therefore the decision maker has an incentive to lower her return requirement. Proposition 2.6 also shows that increased information on the underlying X (in the sense of increased λ) postpones the exercise in the sense that the optimal exercise threshold increases. This again makes sense, since increased λ results in shorter expected gaps between the observations. This means that the decision maker is less likely to miss a profitable moment and therefore has an incentive to increase her return requirement. To close the section, we remark that an analogous interpretation holds also for Proposition 2.7.
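These monotonicity and convergence properties can be observed concretely. The snippet below is an illustrative check (not part of the original analysis), evaluating the closed-form GBM threshold x∗(λ) = b(β−1)K/(β(b−1)) derived in Sect. 3.1 along an increasing grid of information rates:

```python
import math

# Illustrative parameters: the GBM example of Sect. 3.1
mu, sigma2, r, K = 0.01, 0.1, 0.05, 1.2
theta = 0.5 - mu / sigma2
b = theta + math.sqrt(theta**2 + 2 * r / sigma2)
x_hat = b / (b - 1) * K                                  # full-information threshold

def x_star(lam):
    # closed-form constrained threshold of Sect. 3.1
    beta = theta + math.sqrt(theta**2 + 2 * (r + lam) / sigma2)
    return b * (beta - 1) / (beta * (b - 1)) * K

rates = (0.01, 0.1, 1.0, 10.0, 1000.0)
thresholds = [x_star(lam) for lam in rates]
print([round(t, 3) for t in thresholds], round(x_hat, 3))
```

The thresholds increase with λ, stay strictly below x̂, and approach x̂ as λ grows, exactly as Proposition 2.6 predicts.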

3 Illustrations

3.1 Geometric Brownian Motion and Perpetual American Call

In this subsection we consider the problem studied in [10], namely the perpetual
American call option with underlying geometric Brownian motion. Let X be a regular
linear diffusion with the infinitesimal generator

\[ A = \frac{1}{2}\sigma^2 x^2\frac{d^2}{dx^2} + \mu x\frac{d}{dx}, \]

where μ ∈ R and σ > 0. The scale density S′ reads as S′(x) = x^{−2μ/σ²} and the speed density m′ reads as

\[ m'(x) = \frac{2}{(\sigma x)^2}\,x^{2\mu/\sigma^2}. \]
The optimal stopping problem is now written as

\[ V(x) = \sup_\tau \mathbb{E}_x\big[e^{-r\tau}(X_\tau - K)^+ 1_{\{\tau<\zeta\}}\big], \tag{3.1} \]

where r > 0 is the constant discount factor and K is an exogenously given constant. For the sake of finiteness, we assume that μ < r and μ − ½σ² > 0. This guarantees that the optimal exercise thresholds are finite and are attained almost surely in a finite
time. It is well known that the increasing and decreasing solutions ψ· and ϕ· can be expressed as

\[ \psi_r(x) = x^b, \quad \varphi_r(x) = x^a, \quad \psi_{r+\lambda}(x) = x^\beta, \quad \varphi_{r+\lambda}(x) = x^\alpha, \]

where the constants

\[
\begin{cases}
b = \big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big) + \sqrt{\big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2r}{\sigma^2}} > 1, \\[1ex]
a = \big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big) - \sqrt{\big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2r}{\sigma^2}} < 0, \\[1ex]
\beta = \big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big) + \sqrt{\big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2(r+\lambda)}{\sigma^2}} > 1, \\[1ex]
\alpha = \big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big) - \sqrt{\big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2(r+\lambda)}{\sigma^2}} < 0.
\end{cases}
\]

It is a simple computation to show that the Wronskian \(B_{r+\lambda} = 2\sqrt{\big(\tfrac12 - \tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2(r+\lambda)}{\sigma^2}}\).
Since the payoff g(x) = (x − K)+ = 0 when x ≤ K, we find after straightforward integration that the resolvent λ(R_{r+λ}g) can be written as

\[
\lambda(R_{r+\lambda}g)(x) =
\begin{cases}
\dfrac{\lambda}{r+\lambda-\mu}\,x - \dfrac{\lambda}{r+\lambda}\,K - \dfrac{2\lambda K^{1-\alpha}}{\sigma^2 B_{r+\lambda}\,\alpha(1-\alpha)}\,x^\alpha, & x > K, \\[2ex]
\dfrac{2\lambda K^{1-\beta}}{\sigma^2 B_{r+\lambda}\,(\beta-1)\beta}\,x^\beta, & x \leq K.
\end{cases}
\tag{3.2}
\]

We use now Theorem 1.1 to determine the optimal exercise threshold x∗ and the optimal value functions V and V0. First, elementary integration yields

\[ J(x) = \frac{2}{\sigma^2\kappa}\,x^{-\kappa}, \]

for all x ∈ R+, where \(\kappa = \sqrt{\big(\tfrac12-\tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2(r+\lambda)}{\sigma^2}} - \sqrt{\big(\tfrac12-\tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2r}{\sigma^2}} > 0\). Similarly we find that

\[
I(x) =
\begin{cases}
\dfrac{2}{\sigma^2}\,x^{-\beta}\Big(\dfrac{x}{\beta-1} - \dfrac{K}{\beta}\Big), & x > K, \\[2ex]
\dfrac{2K^{-(\beta-1)}}{\sigma^2\beta(\beta-1)}, & x < K.
\end{cases}
\]

Let x > K. It is an elementary computation to see that

\[ \frac{I(x)}{J(x)} = \frac{\kappa x^{-b}}{\beta(\beta-1)}\big(\beta x - K(\beta-1)\big) \]

and, consequently, that

\[ \frac{d}{dx}\left(\frac{I(x)}{J(x)}\right) \gtrless 0 \quad \text{when } x \lessgtr x^* := \frac{b(\beta-1)}{\beta(b-1)}\,K. \]
We remark that it is a straightforward computation to verify that

\[ \frac{b(\beta-1)}{\beta(b-1)} = \frac{b - \frac{r}{r+\lambda}\,a}{b - \frac{(r-\mu)a-\lambda}{r+\lambda-\mu}}; \]

see [10], p. 147, expression (15). Finally, using the expressions (3.2) and (1.7) we
obtain the representation

\[
V(x) =
\begin{cases}
\dfrac{\lambda}{r+\lambda-\mu}\,x - \dfrac{\lambda}{r+\lambda}\,K + \dfrac{\frac{r-\mu}{r+\lambda-\mu}\,x^* - \frac{r}{r+\lambda}\,K}{\varphi_{r+\lambda}(x^*)}\,\varphi_{r+\lambda}(x), & x \geq x^*, \\[2ex]
\dfrac{x^*-K}{\psi_r(x^*)}\,\psi_r(x), & x < x^*,
\end{cases}
\]

for the optimal value V; see [10], pp. 146–147, expressions (13), (14) and (16). Thus we have derived the results on x∗ and V by Dupuis and Wang from ours. A straightforward differentiation yields

\[ \frac{dx^*}{d\lambda} = \hat x\,\frac{1}{\beta^2}\,\frac{d\beta}{d\lambda} = \frac{\hat x}{\beta^2\sigma^2}\left(\sqrt{\Big(\frac12 - \frac{\mu}{\sigma^2}\Big)^2 + \frac{2(r+\lambda)}{\sigma^2}}\,\right)^{-1} > 0; \]

this observation is in line with Part (1) of Proposition 2.6. Moreover, since β → ∞ as λ → ∞, we see immediately from the representation of x∗ that x∗ → x̂ := bK/(b−1) as λ → ∞. Finally, since ϕ_{r+λ}(x)/ϕ_{r+λ}(x∗) < 1 whenever x > x∗, we find after elementary manipulations that

\[ \frac{\lambda}{r+\lambda-\mu}\,x - \frac{\lambda}{r+\lambda}\,K + \frac{\frac{r-\mu}{r+\lambda-\mu}\,x^* - \frac{r}{r+\lambda}\,K}{\varphi_{r+\lambda}(x^*)}\,\varphi_{r+\lambda}(x) \to x - K, \]

for all x > x∗ and, consequently, that both V(x) and V0(x) tend to

\[
\hat V(x) =
\begin{cases}
x - K, & x \geq \hat x, \\[1ex]
\dfrac{\hat x - K}{\psi_r(\hat x)}\,\psi_r(x), & x < \hat x,
\end{cases}
\]

as λ → ∞.
as λ → ∞.
To end the subsection, we illustrate graphically in Fig. 2 the value functions V , V0
and V̂ under the parameter configuration μ = 0.01, r = 0.05, σ 2 = 0.1, λ = 0.1 and
K = 1.2.

Fig. 2 The value functions V̂ under the complete information (black dashed curve) and V under the information rate λ = 0.1 (black solid curve) under the parameter configuration μ = 0.01, r = 0.05, σ² = 0.1, and K = 1.2. The grey dashed line is the payoff g : x → (x − K)+ and the value function V0 can be recovered from the figure by first following V and after the intersection the payoff g. The corresponding optimal thresholds are x̂ = 3.716 and x∗ = 2.010.

3.2 Brownian Motion Killed in Origin and Perpetual American Call

As an example with non-singular boundary behavior, let X be a standard Brownian motion killed in the origin with the initial state x > 0. Now, the boundary ∞ is natural. In this case, the functions ψ_r and ϕ_r read as \(\psi_r(x) = \sinh(\sqrt{2r}\,x)\) and \(\varphi_r(x) = \exp(-\sqrt{2r}\,x)\), and the Wronskian \(B_r = \sqrt{2r}\). Moreover, the process is in natural scale, that is, S′(x) = 1, and the speed density reads as m′(x) = 2.
As in the previous subsection, the optimal stopping problem reads as

V (x) = sup Ex e−rτ (Xτ − K)+ 1{τ <ζ } .
τ

We verify readily that the assumptions of Theorem 1.1 are satisfied. To determine the optimal exercise threshold x∗, we compute the integrals

\[
\begin{aligned}
g(x)\int_x^\infty \varphi_{r+\lambda}(y)\psi_r(y)m'(y)\,dy &= \frac{x-K}{\lambda}\,e^{-\sqrt{2(r+\lambda)}\,x}\Big(\sqrt{2(r+\lambda)}\,\sinh(\sqrt{2r}\,x) + \sqrt{2r}\,\cosh(\sqrt{2r}\,x)\Big), \\
\psi_r(x)\int_x^\infty \varphi_{r+\lambda}(y)g(y)m'(y)\,dy &= \frac{2\sinh(\sqrt{2r}\,x)}{\sqrt{2(r+\lambda)}}\,e^{-\sqrt{2(r+\lambda)}\,x}\left(x - K + \frac{1}{\sqrt{2(r+\lambda)}}\right),
\end{aligned}
\]

on x > K. Now, the state x∗ is characterized by the identity

\[ \big(x^* - K\big)\Big(r + \sqrt{r(r+\lambda)}\,\coth\big(\sqrt{2r}\,x^*\big)\Big) = \frac{\lambda}{\sqrt{2(r+\lambda)}}; \tag{3.3} \]

we verify readily that the condition (3.3) has a unique root x∗ > K. Furthermore, we find from (3.3) that x∗ → x̂ as λ → ∞, where x̂ = argmax{g/ψ_r} > K is characterized by

\[ \sqrt{2r}\,(\hat x - K)\coth(\sqrt{2r}\,\hat x) = 1. \]
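Condition (3.3) is transcendental, but its unique root is easily bisected numerically; an illustrative sketch under the Fig. 3 parameter configuration:

```python
import math

# Illustrative parameters: the Fig. 3 configuration
r, lam, K = 0.12, 1.88, 2.4
q = math.sqrt(2 * r)

def coth(z):
    return 1.0 / math.tanh(z)

def h(x):
    # left side minus right side of the threshold condition (3.3)
    return (x - K) * (r + math.sqrt(r * (r + lam)) * coth(q * x)) \
        - lam / math.sqrt(2 * (r + lam))

lo, hi = K, 20.0               # h(K) < 0 and h grows linearly, so the root is bracketed
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if h(lo) * h(mid) <= 0 else (mid, hi)

x_star = 0.5 * (lo + hi)
print(round(x_star, 3))        # ≈ 3.887, the threshold reported in Fig. 3
```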
To end the subsection, we illustrate graphically in Fig. 3 the value functions V, V0 and V̂ under the parameter configuration r = 0.12, λ = 1.88, and K = 2.4.

Fig. 3 The value functions V̂ under the complete information (black dashed curve) and V under the information rate λ = 1.88 (black solid curve). The grey dashed line is the payoff g : x → (x − K)+ and the value function V0 can be recovered from the figure by first following the curve V and then after the intersection the payoff g. The corresponding optimal stopping thresholds are x̂ = 4.386 and x∗ = 3.887.

3.3 Logistic Diffusion and Perpetual American Put

As a generalization of the geometric Brownian setting and an illustration of Theorem 1.2, we consider the case of a perpetual American put with a mean reverting underlying X. More precisely, let X follow a regular linear diffusion with the infinitesimal generator

\[ A = \frac{1}{2}\sigma^2x^2\frac{d^2}{dx^2} + \mu x(1-\gamma x)\frac{d}{dx}, \]

where the exogenous constants μ, γ, σ ∈ R+. This process is called the logistic diffusion (or the geometric Ornstein-Uhlenbeck process [17] or the radial Ornstein-Uhlenbeck process [5]) and was made famous in the literature of real options at the latest by [9]. As above, a straightforward computation yields the scale density \(S'(x) = x^{-2\mu/\sigma^2}e^{\frac{2\gamma\mu}{\sigma^2}x}\) and, consequently, the speed density

\[ m'(x) = \frac{2}{(\sigma x)^2}\,x^{2\mu/\sigma^2}e^{-\frac{2\gamma\mu}{\sigma^2}x} \]
for all x ∈ R+. The optimal stopping problem is now formulated as

\[ V(x) = \sup_\tau \mathbb{E}_x\big[e^{-r\tau}(K - X_\tau)^+ 1_{\{\tau<\zeta\}}\big], \tag{3.4} \]

with μ < r and K > 0.


We use Theorem 1.2 to study the optimal exercise threshold x† and the optimal value functions V and V0. We know from the literature, see [17], Lemma 3.4.3 or [8], Sect. 6.5, that the decreasing solution ϕ_r and the increasing solution ψ_{r+λ} can be expressed as

\[
\begin{cases}
\varphi_r(x) = x^{b}\,U\!\big(b,\,2b + \tfrac{2\mu}{\sigma^2},\,\tfrac{2\mu\gamma}{\sigma^2}x\big), \\[1ex]
\psi_{r+\lambda}(x) = x^{\beta}\,M\!\big(\beta,\,2\beta + \tfrac{2\mu}{\sigma^2},\,\tfrac{2\mu\gamma}{\sigma^2}x\big),
\end{cases}
\]
where \(\beta = \big(\tfrac12-\tfrac{\mu}{\sigma^2}\big) + \sqrt{\big(\tfrac12-\tfrac{\mu}{\sigma^2}\big)^2 + \tfrac{2(r+\lambda)}{\sigma^2}}\) and the functions M : R+ → R+ and U : R+ → R+ are the two linearly independent solutions of Kummer's equation, i.e., the so-called confluent hypergeometric functions of the first and second kind, cf. [1], p. 504. Due to the analytically demanding nature of the functions ϕ_r and ψ_{r+λ}, we will now fix a parameter setting and illustrate our results numerically and graphically. In Table 1 we present the optimal stopping thresholds for different rates λ under the parameter configuration μ = 0.01, r = 0.05, σ² = 0.1, γ = 0.5, and K = 2.4.

Table 1 The optimal stopping threshold x† for various information rates λ, together with the smooth pasting threshold x̃ (λ = ∞) of the ordinary case, under the parameter configuration μ = 0.01, r = 0.05, σ² = 0.1, γ = 0.5, and K = 2.4

λ    0.005   0.1     0.5     1       5       10      50      250     ∞
x†   1.837   1.626   1.248   1.150   1.023   0.994   0.956   0.939   0.926

Fig. 4 The value functions Ṽ under the complete information (black dashed curve) and V under the information rate λ = 0.1 (black solid curve). The grey dashed line is the payoff g : x → (K − x)+ and the value function V0 can be recovered from the figure by first following the payoff g and then after the intersection the curve V. The corresponding optimal stopping thresholds are x̃ = 0.926 and x† = 1.626.
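The Kummer function M appearing above can be evaluated directly from its defining power series; the sketch below is an illustrative implementation (the elementary identity M(a, a, z) = e^z provides a built-in check):

```python
import math

def kummer_M(a, b, z, tol=1e-14, max_terms=500):
    # M(a, b, z) = sum_{n>=0} (a)_n z^n / ((b)_n n!), summed by term recursion
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

# check against the elementary special case M(a, a, z) = exp(z)
print(round(kummer_M(1.3, 1.3, 0.7), 10), round(math.exp(0.7), 10))
```

The series converges for all z, so this suffices for moderate arguments; dedicated special-function libraries are preferable for large parameters.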
The numerical results reported in Table 1 are in line with our main results. In
particular, these numerics indicate that the optimal exercise threshold x † is a decreas-
ing function of the intensity λ and that these thresholds tend to the smooth pasting
threshold x̃ of the ordinary case as λ increases. To end the subsection, we illustrate
graphically in Fig. 4 the value functions V , V0 and Ṽ under the parameter configura-
tion μ = 0.01, r = 0.05, σ 2 = 0.1, γ = 0.5, λ = 0.1 and K = 2.4.

3.4 CEV Process and Perpetual American Put

As another illustration of Theorem 1.2, we consider the perpetual American put when the underlying dynamics follow a CEV process X with the differential generator \(A = \frac12 x^4\frac{d^2}{dx^2}\) and the initial state x > 0. This process is a classical example of an Itô integral which is a strict local martingale, and it is connected to the theory of financial bubbles, see, e.g., [7]. The boundaries of the state space are classified as follows: the origin is natural and ∞ is entrance, see, e.g., [15]. Now, the functions ψ_r and ϕ_r read as \(\psi_r(x) = x\exp(-\sqrt{2r}\,x^{-1})\) and \(\varphi_r(x) = x\sinh(\sqrt{2r}\,x^{-1})\). Moreover, the process X is in natural scale and the density of the speed measure reads as \(m'(x) = \frac{2}{x^4}\). Finally, the Wronskian \(B_r = \sqrt{2r}\).
As in the previous subsection, the optimal stopping problem is written as

\[ V(x) = \sup_\tau \mathbb{E}_x\big[e^{-r\tau}(K - X_\tau)^+ 1_{\{\tau<\zeta\}}\big], \tag{3.5} \]

with r, K > 0. We use Theorem 1.2 to find the optimal characteristics of this problem. To determine the optimal stopping threshold x†, we compute the integrals

\[
\begin{aligned}
g(x)\int_0^x \psi_{r+\lambda}(y)\varphi_r(y)m'(y)\,dy &= \frac{K-x}{\lambda}\,e^{-\sqrt{2(r+\lambda)}\,x^{-1}}\Big(\sqrt{2(r+\lambda)}\,\sinh\big(\sqrt{2r}\,x^{-1}\big) + \sqrt{2r}\,\cosh\big(\sqrt{2r}\,x^{-1}\big)\Big), \\
\varphi_r(x)\int_0^x \psi_{r+\lambda}(y)g(y)m'(y)\,dy &= \frac{2\sinh\big(\sqrt{2r}\,x^{-1}\big)}{\sqrt{2(r+\lambda)}}\,e^{-\sqrt{2(r+\lambda)}\,x^{-1}}\left(K - x + \frac{Kx}{\sqrt{2(r+\lambda)}}\right),
\end{aligned}
\tag{3.6}
\]

on x < K. After elementary manipulations, we find that the threshold x† is characterized by the condition

\[ \big(K - x^\dagger\big)\left(\frac{2r}{\sqrt{2(r+\lambda)}}\,\sinh\big(\sqrt{2r}\,(x^\dagger)^{-1}\big) + \sqrt{2r}\,\cosh\big(\sqrt{2r}\,(x^\dagger)^{-1}\big)\right) = \frac{\lambda}{r+\lambda}\,Kx^\dagger\,\sinh\big(\sqrt{2r}\,(x^\dagger)^{-1}\big); \tag{3.7} \]

again, we verify readily that (3.7) has a unique root x† < K. Furthermore, we observe that x† → x̃ as λ → ∞, where x̃ = argmax{g/ϕ_r} < K is characterized by

\[ \sqrt{2r}\,(K - \tilde x)\coth\big(\sqrt{2r}\,\tilde x^{-1}\big) = K\tilde x. \]
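As with (3.3), the unique root of (3.7) can be bisected numerically; an illustrative sketch under the Fig. 5 parameter configuration:

```python
import math

# Illustrative parameters: the Fig. 5 configuration
r, lam, K = 0.05, 1.0, 2.4
q = math.sqrt(2 * r)
p = math.sqrt(2 * (r + lam))

def h(x):
    # left side minus right side of the threshold condition (3.7)
    lhs = (K - x) * (2 * r / p * math.sinh(q / x) + q * math.cosh(q / x))
    rhs = lam / (r + lam) * K * x * math.sinh(q / x)
    return lhs - rhs

lo, hi = 1e-3, K               # h > 0 near the origin and h(K) < 0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if h(lo) * h(mid) <= 0 else (mid, hi)

x_dagger = 0.5 * (lo + hi)
print(round(x_dagger, 3))      # ≈ 0.548, the threshold reported in Fig. 5
```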

To end the subsection, we illustrate graphically in Fig. 5 the value functions V, V0 and Ṽ under the parameter configuration r = 0.05, λ = 1, and K = 2.4.

Fig. 5 The value functions Ṽ under the complete information (black dashed curve) and V under the information rate λ = 1 (black solid curve). The grey dashed line is the payoff g : x → (K − x)+ and the value function V0 can be recovered from the figure by first following the payoff g and then after the intersection the curve V. The corresponding optimal stopping thresholds are x̃ = 0.400 and x† = 0.548.

4 Concluding Comments

We studied in this paper the optimal stopping problems (1.4) and (1.6) proposed originally by Dupuis and Wang in [10], where the authors solve these problems in the case of a perpetual American call with underlying geometric Brownian motion. As our main result, we proposed a mild set of conditions on the underlying and the payoff and solved the problems under these conditions. These results are formulated in Theorems 1.1 and 1.2. After proving the necessary auxiliary results, we proposed in Sect. 2.2.1 the variational inequalities (2.10) and (2.13) and solved them directly using the Markovian theory of linear diffusions. As a result, we produced candidates (2.14) and (2.12) for the optimal solutions. We also derived these candidates using the free
boundary approach of [10] and established that the approaches are consistent. The
verification phase was carried out in Sect. 2.3 in the spirit of [10]. In [10], the authors
interpret the signal process N as an exogenous liquidity constraint. In this paper, we
proposed and discussed an alternate interpretation of N as an exogenous information
constraint.
The main contribution of this paper is that it generalizes considerably the results
of [10] with respect to the underlying and the payoff structure. In comparison to [2],
Theorem 3, we made additional assumptions on the limiting behavior of these func-
tions and on the integrability of the payoff g. However, these additional assumptions
are not very restrictive from the applications point of view. In this sense, this study
shows that the introduction of an independent Poissonian signal process N to the
problem lowers the degree of solvability of the problem only slightly. Moreover, we
avoided making any a priori differentiability assumptions in deriving the candidates
in Sect. 2.2.1. In fact, we saw that the smoothness properties of the values can be seen
as a consequence of the variational inequalities.

Acknowledgements An anonymous referee is gratefully acknowledged for careful reading and for a
number of very helpful comments. The author thanks Prof. Luis H.R. Alvarez, Prof. Fred Espen Benth,
Dr. Paul C. Kettler, Prof. Andreas E. Kyprianou and Prof. Paavo Salminen for discussions and comments
on earlier versions of this paper. Financial support from Research Foundation of OP-Pohjola group and the
project “Electricity markets: modelling, optimization and simulation (EMMOS)” funded by the Norwegian
Research Council under grant 205328/v30 is acknowledged. Prof. Esko Valkeila and Department of Math-
ematics and System Analysis in Aalto University School of Science and Technology are acknowledged for
hospitality.

References

1. Abramowitz, M., Stegun, I.: Handbook of Mathematical Functions. Dover, New York (1968)
2. Alvarez, L.H.R.: Reward functionals, salvage values and optimal stopping. Math. Methods Oper. Res.
54(2), 315–337 (2001)
3. Alvarez, L.H.R., Stenbacka, R.: Adoption of uncertain multi-stage technology projects: a real options
approach. J. Math. Econ. 35, 71–97 (2001)
4. Alvarez, L.H.R., Stenbacka, R.: Takeover timing, implementation uncertainty, and embedded disin-
vestment options. Rev. Finance 10, 417–441 (2006)
5. Borodin, A., Salminen, P.: Handbook of Brownian Motion—Facts and Formulae. Birkhäuser, Basel (2002)
6. Boyarchenko, S., Levendorskiĭ, S.: Exit problems in regime-switching models. J. Math. Econ. 44,
180–206 (2008)
7. Cox, A.M.G., Hobson, D.G.: Local martingales, bubbles and option prices. Finance Stoch. 9, 477–492
(2005)
8. Dayanik, S., Karatzas, I.: On the optimal stopping problem for one-dimensional diffusions. Stoch.
Process. Appl. 107(2), 173–212 (2003)
9. Dixit, A., Pindyck, R.: Investment Under Uncertainty. Princeton University Press, Princeton (1994)
10. Dupuis, P., Wang, H.: Optimal stopping with random intervention times. Adv. Appl. Probab. 34, 141–
157 (2002)
11. Dynkin, E.: Markov Processes I. Springer, Berlin (1965)
12. Guo, X.: An explicit solution to an optimal stopping problem with regime switching. J. Appl. Probab.
38, 464–481 (2001)
13. Guo, X., Liu, J.: Stopping at the maximum of geometric Brownian motion when signals are received.
J. Appl. Probab. 42, 826–838 (2005)
14. Guo, X., Miao, J., Morellec, E.: Irreversible investment with regime shift. J. Econ. Theory 122, 37–59
(2005)
15. Jeanblanc, M., Yor, M., Chesney, M.: Mathematical Methods for Financial Markets. Springer, London
(2009)
16. Jiang, Z., Pistorius, M.R.: On perpetual American put valuation and first-passage in a regime-switching model with jumps. Finance Stoch. 12, 331–355 (2008)
17. Johnson, T.C.: The optimal timing of investment decisions. Ph.D. Thesis, King’s College, London
(2006)
18. Lempa, J.: A note on optimal stopping of diffusions with a two-sided optimal rule. Oper. Res. Lett.
38, 11–16 (2010)
19. Øksendal, B.: Stochastic Differential Equations, 5th edn. Springer, Berlin (2000)
20. Peskir, G., Shiryaev, A.: Optimal Stopping and Free Boundary Problems. Birkhäuser, Basel (2006)
21. Rogers, L.C.G., Williams, D.: Diffusions, Markov Processes and Martingales, vol. 1. Cambridge Uni-
versity Press, Cambridge (2001)
22. Rogers, L.C.G., Zane, O.: A simple model of liquidity effects. In: Advances in Finance and Stochas-
tics: Essays in Honour of Dieter Sondermann, pp. 161–176. Springer, Berlin (2002)
23. Salminen, P.: Optimal stopping of one-dimensional diffusions. Math. Nachr. 124, 85–101 (1985)
24. Shepp, L., Shiryaev, A.: The Russian option: reduced regret. Ann. Appl. Probab. 3, 631–640 (1993)
25. Shiryaev, A.: Probability, 2nd edn. Springer, Berlin (1996)