Zero-Sum Dynkin Games Under Common and Independent Poisson Constraints



DAVID HOBSON, GECHUN LIANG, EDWARD WANG∗

Abstract. Zero-sum Dynkin games under the Poisson constraint have been studied widely in
the recent literature. In such a game the players are only allowed to stop at the event times of a
Poisson process. The constraint can be modelled in two different ways: either both players share
the same Poisson process (the common constraint) or each player has her own Poisson process (the

independent constraint). In the Markovian case where the payoff is given by a pair of functions
of an underlying diffusion, we give sufficient conditions under which the solution of the game (the
value function, and the optimal stopping sets for each player) under the common (respectively, inde-
pendent) constraint is also the solution of the game under the independent (respectively, common)
constraint. Roughly speaking, if the stopping sets of the maximiser and minimiser in the game un-
der the common constraint are disjoint, then the solution to the game is the same under both the
common and the independent constraint. However, the fact that the stopping sets are disjoint in the
game under the independent constraint is not sufficient to guarantee that the solution of the game
under the independent constraint is also the solution under the common constraint.

Key words. Zero-sum Dynkin game, common Poisson constraint, independent Poisson con-
straint.

AMS subject classifications. 60G40, 91A05, 49L20.

1. Introduction. A zero-sum Dynkin game is a game played by two players, a


maximiser and a minimiser, who each choose a stopping time to maximise/minimise
the expected payoff. The two players are then denoted as the sup player and the
inf player. If the sup player is first to stop (at τ ) then the payment is Lτ ; if the
inf player is first to stop (at σ) then the payment is Uσ; if there is a tie (τ = σ) then
the payment is Mτ . Dynkin games were first introduced by Dynkin [10], and later
extended in a discrete time setup to the above form by Neveu [27], and in a continuous
time setup by Bismut [2]. Since then, there have been multiple works on discrete time
or continuous time Dynkin games. For example, Cvitanic and Karatzas [4] utilized
a backward stochastic differential equation (BSDE) approach, Ekström and Peskir
[11] considered Dynkin games under a Markovian setup (in the sense that the payoffs
are discounted functions of an underlying diffusion) and Kifer [19] gave an extensive
survey with applications to financial game options. An extension of zero-sum Dynkin
games allows for randomized strategies. For relevant literature on this topic, refer to
Laraki and Solan [20], Rosenberg et al. [29], and Touzi and Vieille [31]. More recent
work in this area includes De Angelis et al. [6, 7, 8].
In this paper, we consider Dynkin games with an extra constraint on the players’
stopping strategies. The idea is that both players may only stop when they receive
a signal generated by a Poisson process which is independent of the payoff processes.
Optimal stopping problems with this type of constraint were first studied by Dupuis
and Wang [9] and further developed in Lempa [21], Liang [23], Hobson [15], Hobson
and Zeng [16] and Alvarez et al [1]. For Dynkin games, Pérez et al [28] applied the
constraint to one of the players but most of the literature on Dynkin games under
constrained stopping focuses on the case where both players are constrained. This has
been done by following one of two approaches: either let both players use a common
signal process or assign independent signal processes to each player. Under the first
∗ Department of Statistics, University of Warwick, Coventry, CV4 7AL, U.K. Email address:

[email protected]; [email protected]; [email protected]



approach the players share the same constraint, and we denote this approach as the
‘common Poisson constraint’ case. Liang and Sun [24] analysed Dynkin games with
the common Poisson constraint under the ‘order condition’ L ≡ M ≤ U and proved
the existence of a saddle point via a BSDE approach, see also Hobson et al [17] for an
example under the ‘generalised order condition’ L ∧ U ≤ M ≤ L ∨ U . (This condition
was also used in the studies by Stettner [?, Theorem 3], Merkulov [26, Chapter 5] and
Guo [14] of the existence of the game value in the unconstrained case.) Under the
second approach the Poisson processes describing the constraints for the two players
are independent and we denote this approach the ‘independent Poisson constraint’.
Dynkin games under an independent Poisson constraint are studied in Liang and Sun
[25], Lempa and Saarinen [22] and Gapeev [12].
In this paper we consider a perpetual zero-sum Dynkin game in a Markovian
setting in which both players are constrained to choose stopping times taking values
in the event times of a Poisson process. The main problem we investigate is whether
the Dynkin game is the same in the common and the independent constraint setups (in
the sense that the value function and the stopping sets of each player do not depend
on whether we work in the common or independent constraint problem). Our study
is motivated by a finite-horizon example presented in Liang and Sun [25, Remark 5.2]
in which the game value and optimal strategies of the two players are the same in the
two setups, and we wanted to understand the extent to which this result is true in
general.
In the main setting we study, the payoffs are based on discounted functions of a
time-homogeneous diffusion X. In particular L is given by Lt = e−rt l(Xt ) and U is
given by Ut = e−rt u(Xt ) for a pair of non-negative functions l and u defined on the
state space of X, and we take M ≡ L. We consider two constrained optimal stopping
games, in one each player can only stop at event times of a common Poisson process
(of rate λ > 0), and in the other each player is constrained to only stop at event
times of their own, individual Poisson process (where this pair of Poisson processes is
independent, and each is of rate λ). In each of these settings (common constraint and
independent constraint) it is reasonable to expect that the value function is a function
of the initial value of the underlying diffusion, and the optimal stopping rules are for
the sup player to stop at the first event time of their Poisson process (or, in the
common constraint problem, at the first event time of the common Poisson process)
at which the diffusion is in some (time-independent) set A, and for the inf player to
stop at the first event time of their Poisson process (or, in the common constraint
problem, at the first event time of the common Poisson process) at which the diffusion
is in some other (time-independent) set B. We give results (Theorems 5.8 and 5.9)
to show that under some technical assumptions the solution of the game does indeed
take this form. Indeed, if v I is the value of the game under the independent constraint
(expressed as a function of the initial value), then an optimal stopping set for the sup
player is AI = {x : v I (x) < l(x)}, while an optimal stopping set for the inf player is
B I = {x : v I (x) ≥ u(x)}. Furthermore, if v C denotes the value of the game under the
common constraint (also as a function of the initial value), an optimal stopping set for
the sup player can be chosen as AC = {x : v C (x) < l(x)} ∪ {x : v C (x) > l(x) > u(x)}
and an optimal stopping set for the inf player as B C = {x : v C (x) ≥ u(x)}. Note the
difference in form of the stopping set for the sup player: in the common constraint
problem the sup player additionally stops if the inf player intends to stop and the
sup player can get a higher payoff by stopping themselves.
Suppose that the solution to the Dynkin game under the common constraint is

of the form (v^C, A^C, B^C). We ask: under what conditions is this also the solution of the game under the independent constraint? A sufficient condition is that v^C ∧ l ≤ u. Note that under the order condition l ≤ u this is automatically satisfied.
Equivalently the condition may be written as AC and B C are disjoint. Now, suppose
that the solution to the Dynkin game under the independent constraint is of the form
(v^I, A^I, B^I). We ask: under what conditions is this also the solution of the game under the common constraint? A sufficient condition is that v^I ∧ l ≤ u, and again, under
the order condition this is always satisfied. However, this condition is not equivalent
to the fact that AI and B I are disjoint.
The rest of the paper is structured as follows. In Section 2 we introduce Dynkin
games under the common and the independent constraints in a diffusion setting.
Our main results on constrained zero-sum Dynkin games are presented in Section 3.
Section 4 contains two examples. In the first example, the optimal stopping sets
for the problem under the common constraint are disjoint, and therefore the value
function and stopping sets of the independent constraint game are the same as those
for the common constraint. Then, in a second example we show that the converse
is not true—in this example we give the game value and (disjoint) stopping sets for
the game under the independent constraint but these do not define the solution for
the game under the common constraint. Finally, in Section 5, we prove existence
results for zero-sum Dynkin games in the diffusion setting under the common and
the independent constraints. This allows us to give a large family of problems for
which we know that there exist solutions (with time-homogeneous stopping sets) to
the game with either the common or independent constraint, at which point our main
result may be applied. A key element of our modelling is that we assume an infinite
horizon problem. This means that the standard finite-horizon BSDE methodology of,
for example, Liang and Sun [25] does not apply. Although we borrow ideas from [25]
we also need new ideas to deal with the infinite horizon and unbounded solutions. As a
consequence some of our BSDE results over the infinite horizon may be of independent
interest.

2. Setup. We work on a filtered probability space (Ω, F, F = (F_t)_{t≥0}, P) satisfying the usual conditions, and consider a zero-sum Dynkin game. The payoff process
for the sup player is given by L = (Lt )t≥0 (where L is a progressively measurable
process), and we denote their stopping time by τ , so that if they stop first the payoff
of the game is Lτ . Similarly, the payoff process for the inf player is given by the
progressively measurable process U = (Ut )t≥0 , and we denote their stopping time by
σ, so that if they stop first the payoff of the game is Uσ. It is quite common in the
literature to assume the order condition L ≤ U , but we make no such assumption. If
both players stop at the same time (τ = σ) we assume that the stopping of the sup
player takes precedence (so the payoff is Lτ ). It is very possible to consider other
conventions on the set τ = σ (for example, the payoff is Uσ ) without changing the
spirit of our results. However, the details would be slightly different, and the main
reason we do not consider these alternate conventions is to keep the number of cases
under control. Finally, if neither player stops at a finite time (i.e. τ = ∞ = σ) we as-
sume that the payoff is given by an F∞ -measurable random variable M∞ . Candidate
examples are M∞ = 0 or M∞ = lim_{t→∞} L_t, assuming that this limit exists. In summary, the payoff of the game R = R(τ, σ) is given by

(2.1) R(τ, σ) = Lτ 1{τ ≤σ<∞} + Uσ 1{σ<τ } + M∞ 1{τ =σ=∞} ,



and the expected payoff is defined by J(τ, σ) = E[R(τ, σ)].


In a general Dynkin game τ and σ can be any stopping times. Instead, we
introduce constraints to players’ stopping strategies by only allowing them to stop
when they receive a signal. The signal is modelled by an independent Poisson process.
We want to consider modelling this in two ways.
The first approach is to assume that both players have the same signal process,
which we denote by the ‘common Poisson constraint’, or the ‘common constraint’ as
shorthand. In this case, we assume that the filtered probability space (Ω, F, F, P)
supports a Poisson process with constant intensity λ > 0 which is independent of the
payoff processes L and U (and terminal random variable M∞ ). We denote the jump
times of the Poisson process by {T_n^C}_{1≤n≤∞}, with T_∞^C = ∞. We assume τ, σ ∈ R^C(λ), where

R^C(λ) = {γ : γ is a stopping time such that γ(ω) = T_n^C(ω) for some n ∈ {1, . . . , ∞}}.

The upper and lower values of the Dynkin game under the common Poisson constraint
are defined as:

(2.2)    \overline{v}^C = inf_{σ ∈ R^C(λ)} sup_{τ ∈ R^C(λ)} J(τ, σ),    \underline{v}^C = sup_{τ ∈ R^C(λ)} inf_{σ ∈ R^C(λ)} J(τ, σ).

If \overline{v}^C = \underline{v}^C then we say the game has a value v^C, where v^C = \overline{v}^C = \underline{v}^C. A pragmatic


approach to finding a solution is to try to find a saddle point, i.e. to find a pair
(τ ∗ , σ ∗ ) ∈ RC (λ) × RC (λ) such that J(τ, σ ∗ ) ≤ J(τ ∗ , σ ∗ ) ≤ J(τ ∗ , σ) for all (τ, σ) ∈
RC (λ) × RC (λ). Then,

\overline{v}^C ≤ sup_{τ ∈ R^C(λ)} J(τ, σ*) ≤ J(τ*, σ*) ≤ inf_{σ ∈ R^C(λ)} J(τ*, σ) ≤ \underline{v}^C.

Since trivially \underline{v}^C ≤ \overline{v}^C, we conclude that \overline{v}^C = \underline{v}^C and the game has a value.


The second approach is to assign separate signal processes for the two players, and
we call this the ‘independent Poisson constraint’ case, or the ‘independent constraint’
as shorthand. In this case, we assume that the probability space (Ω, F, F, P) supports
two independent Poisson processes, both1 with intensity λ, such that the two Poisson
processes are independent of the payoffs L, U, M∞ . We denote the jump times of the
Poisson processes by {T_n^{MAX}}_{1≤n≤∞} and {T_n^{min}}_{1≤n≤∞} respectively, with T_∞^{MAX} = T_∞^{min} = ∞. We assume τ ∈ R^{MAX}(λ) and σ ∈ R^{min}(λ), where
T∞ = ∞. We assume τ ∈ R (λ) and σ ∈ R (λ), where

R^{MAX}(λ) = {γ : γ is a stopping time such that γ(ω) = T_n^{MAX}(ω) for some n ∈ {1, . . . , ∞}};
R^{min}(λ) = {γ : γ is a stopping time such that γ(ω) = T_n^{min}(ω) for some n ∈ {1, . . . , ∞}}.

The upper and lower values of the Dynkin game under the independent Poisson con-
straint are defined as:

(2.3)    \overline{v}^I = inf_{σ ∈ R^{min}(λ)} sup_{τ ∈ R^{MAX}(λ)} J(τ, σ),    \underline{v}^I = sup_{τ ∈ R^{MAX}(λ)} inf_{σ ∈ R^{min}(λ)} J(τ, σ).

The value v I and saddle point of the game can be defined in the same way as in the

1 Of course, in this case the players may have signal processes with different rates, but we focus

on the case where the rates are the same because we want to compare to the common constraint
problem.

first approach.
One important example is when the filtered probability space (Ω, F, F = (Ft )t≥0 , P)
supports a regular linear diffusion process X driven by a one dimensional Brownian
motion W . Let the interval X ⊂ R be the state space of X. We assume that the
process X does not die inside X and its endpoints are not exit points (see [3, Chapter
2] for the classification of endpoints). The payoff processes L and U are given by
discounted functions of the diffusion X so that Lt = e−rt l(Xt ) and Ut = e−rt u(Xt )
for some r > 0 and non-negative functions l and u defined on X .
Throughout the paper we denote by Px the probability measure P conditioned on
the initial state X0 = x, and let Ex be the expectation with respect to Px , and use
superscript x on random variables to denote the initial state if appropriate. In this
setting we consider a family of zero-sum Dynkin game indexed by the initial point x
of the diffusion. Then (2.1) (indexed by x) becomes

(2.4)    R^x(τ, σ) = e^{−rτ} l(X_τ^x) 1_{τ≤σ<∞} + e^{−rσ} u(X_σ^x) 1_{σ<τ} + M_∞^x 1_{τ=σ=∞},

where the payoff M_∞^x on {τ = σ = ∞} may also depend on x (for example, via X_∞^x if this limit exists). We still talk about the Dynkin game under a Poisson constraint, but now v^C = {v^C(x)}_{x∈X} is a value function.
By extension we can construct a family of game values, one for each initial value
of X, and therefore define functions v C , v I : X 7→ R representing the game values
under the common and the independent Poisson constraints respectively.
Finally, we introduce the first hitting time. For arbitrary initial state X_0 = x ∈ X and Borel set D ⊂ X, define η_D^C = η_D^C(x) = inf{T_n^C : n ≥ 1, X_{T_n^C}^x ∈ D}. That is, η_D^C is the first event time of the Poisson process (with jump times {T_n^C}_{1≤n≤∞}) at which the diffusion process X is in the set D. Similarly, under the independent Poisson constraint, we can define first hitting times η_D^{MAX}, η_D^{min} using the corresponding Poisson processes in the same way.
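To illustrate the setup, the following Monte Carlo sketch (ours, not from the paper) simulates the common-constraint game for a Brownian motion X with illustrative placeholder payoff functions and stopping sets A and B: both players watch the same Poisson clock and stop at the first event time at which X lies in their respective set, with ties resolved in favour of the sup player as in (2.1), and M∞ = 0 if neither stops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholders (not from the paper): payoff functions and stopping sets.
r, lam = 1.0, 1.0                            # discount rate and Poisson rate
l = lambda x: max(1.0 - abs(x), 0.0)         # sup player's payoff function
u = lambda x: l(x) + 0.5                     # inf player's payoff function
in_A = lambda x: abs(x) <= 0.3               # sup player stops when X is in A
in_B = lambda x: 0.3 < abs(x) <= 1.0         # inf player stops when X is in B

def payoff_one_path(x0, horizon=50.0):
    """Simulate one path of X and the common Poisson clock, and return the
    realised payoff R^x(eta_A^C, eta_B^C)."""
    t, x = 0.0, x0
    while t < horizon:
        dt = rng.exponential(1.0 / lam)      # waiting time to the next Poisson event
        x += rng.normal(0.0, np.sqrt(dt))    # diffuse X up to that event time
        t += dt
        if in_A(x):                          # sup player stops (ties go to the sup player)
            return np.exp(-r * t) * l(x)
        if in_B(x):                          # inf player stops
            return np.exp(-r * t) * u(x)
    return 0.0                               # neither player stops: payoff M_infty = 0

x0 = 0.0
estimate = np.mean([payoff_one_path(x0) for _ in range(20000)])
print(f"Monte Carlo estimate of J^x at x = {x0}: {estimate:.4f}")
```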
3. Equivalence of Dynkin games under different constraints. In this sec-
tion we assume that we have the solution to a zero-sum Dynkin game under either
the common or independent constraint, and try to give conditions under which this
solution is also the solution under the alternative constraint. We assume that the
given solution takes a particular time-homogeneous form (see Assumption 3.1). If the
problem is in the setting of a time-homogeneous diffusion then the strong Markov
property, together with the form of the payoffs, mean that we expect that the optimal
strategies for the two players will take the forms of first hitting times. (Moreover,
these first hitting times will be first hitting times of sets which can be defined via
the relative values of the payoff functions l and u together with the value function
v C or v I .) Under some technical assumptions we prove in Section 5 below that this
is indeed the case for a wide class of examples. But here, we try to make minimal
assumptions on the solution to the problem, and try to derive the main results in a
general setting. This allows us to focus on the important properties of the game value
under which the main results are true.
3.1. From the common constraint to the independent constraint. Through-
out this subsection we make the following pair of assumptions on the Dynkin game
under the common constraint:
Assumption 3.1. The Dynkin game under the common Poisson constraint has a value function {v^C(x)}_{x∈X}, with saddle point (η^C_{A^C}, η^C_{B^C}), where A^C = {x : v^C(x) < l(x)} ∪ {x : v^C(x) > l(x) > u(x)} and B^C = {x : v^C(x) ≥ u(x)}.

Assumption 3.2. For every x ∈ X, the random variable M∞ and the stopping times η^C_{A^C}, η^C_{B^C} satisfy e^{−rγ} E^{X_γ^x}[M∞ 1_{η^C_{A^C} ∧ η^C_{B^C} = ∞}] = E^x[M∞ 1_{η^C_{A^C} ∧ η^C_{B^C} = ∞} | F_γ] for any γ ∈ R^C(λ). Here A^C and B^C are the sets defined in Assumption 3.1.
Remark 3.3. In Assumption 3.1 the inequalities are chosen such that if the
functions u, l and v C are continuous then AC is open and B C is closed. This choice
is useful when we consider whether the sufficient condition we give in Corollary 3.12
is also necessary (see Remark 3.13). The condition in Corollary 3.12 would remain
sufficient if we changed the definitions of AC and B C to allow equality in place of strict
inequality or vice-versa (for example to make both AC and B C open). Moreover, the
C C
argument in Section 5 would still apply and (ηA C , ηB C ) would remain a saddle point

under such a change. Nonetheless, the definitions of AC and B C given in Assumption


3.1 are convenient and allow us to fix ideas.
For typographical reasons we will sometimes write A (respectively B) as short-
hand for AC (respectively B C ). However, in the statements of the results or when
confusion might arise we write AC and B C in full.
We have already argued (and will later prove) that Assumption 3.1 is satisfied in
a wide class of constrained zero-sum Dynkin games. The next lemma gives several
situations in which Assumption 3.2 is satisfied.
Lemma 3.4. Assumption 3.2 holds if any of the following conditions is satisfied:
C C x
1. ηA C ∧ ηB C is finite P -a.s. for every x ∈ X ;

2. M∞ = 0;
3. M∞ = lim e−rt m(Xt ), for some non-negative function m on X , where
t→∞
we further assume that the limit exists Px -a.s. for every x ∈ X , and that
Ex [supt≥0 e−rt m(Xt )] < ∞ for every x ∈ X .
C C
Proof. If ηA ∧ ηB is finite Px -a.s. for every x ∈ X , then the set {ηAC C
∧ ηB = ∞} is
a null set regardless of initial condition, hence the required equality follows. Similarly,
if M∞ = 0, the equality is also trivial.
In the final case, by strong Markov property:

Ex [M∞ 1ηAC ∧ηBC =∞ |Fγ ]


= Ex [ lim e−r(t+γ) m(Xt+γ )1ηAC ∧ηBC >t+γ |Fγ ]
t→∞
=
t→∞
1
lim e−rγ Ex [e−rt m(Xt+γ ) ηAC ∧ηBC >t+γ |Fγ ]

1
x
= lim e−rγ EXγ [e−rt m(Xt ) ηAC ∧ηBC >t ]
t→∞

1 1
x x
= e−rγ EXγ [ lim e−rt m(Xt ) ηAC ∧ηBC >t ] = e−rγ EXγ [M∞ ηAC ∧ηBC =∞ ],
t→∞

where the swap of limit and expectation is justified by the integrability assumption.
Example 3.5. We provide a canonical example where Assumptions 3.1 and 3.2
C C
hold. Note that the optimal strategies ηA , ηB are not finite but Assumption 3.2 holds
trivially since M∞ = 0.
Let X be a geometric Brownian motion with drift µ and volatility σ, such that
σ2 2

2 < r. Suppose µ < σ2 . Then X = (0, ∞). Suppose l is given by l(x) = (x − P )+


for some P > 0. Note that L∞ := lim e−rt l(Xt ) exists and that L∞ = 0. By
t→∞
[17, Proposition 4.5], the optimal stopping problem supτ ∈RC (λ) Ex [e−rτ l(Xτ )] has a
C ∗
value V (x) and the optimal stopping time is given by η[x∗ ,∞) for some x > 0, with
∗ ∗
V (x) > l(x) on (0, x ) and V (x) < l(x) on (x , ∞). Suppose u is given by u(x) =
(l ∨ V )(x) + P and take M∞ = 0. (This is very reasonable in this problem since L∞

and U∞ := lim e−rt u(Xt ) both exist and are equal to zero.)
t→∞
It follows from [17, Theorem 2.6] (and is easy to believe) that we have a saddle
point of the form stated in Assumption 3.1, with v C = V , A = AC = [x∗ , ∞) and
2
B = B C = ∅. Given that µ < σ2 we have Px (ηA C
= ∞) > 0 and Px (ηB C
= ∞) = 1 for
every x ∈ X .
C C
Example 3.6. In this example, ηA = ηB = ∞ a.s. and M∞ ̸= 0 a.s.. Nonethe-
less, Assumption 3.1 and 3.2 are both satisfied.
Rt Let X satisfy dXt = rXt dt+dWt with
r > 0 and X0 = x, so that e−rt Xt = x + 0 e−rs dWs which converges a.s. as t ↑ ∞ to
a random variable which has a non-centred Gaussian distribution. Suppose l(x) = |x|.
Then L = (Lt )t≥0 given by Lt = e−rt l(Xt ) is a square integrable submartingale for any
initial value x with limit L∞ := lim Lt and V (x) := supτ ∈R(λ) Ex [Lτ ] = Ex [L∞ ] =
t→∞
E[|Gx,1/2r |] where Ga,b is a Gaussian random variable with mean a and variance b.
Choose c > E[|G0,1/2r |]. Set u(x) := l(x) + c and note that u(x) > V (x). Suppose
R∞
M∞ := L∞ = E[|x + 0 e−rs dWs |].
Now consider the game defined via payoffs L, U and M∞ . Then the value function
is given by v C = V . Moreover the optimal strategy for both the sup and inf players is
to never stop, so that AC = B C = ∅. Then Assumption 3.2 holds by the third element
of Lemma 3.4 and the square integrability of the submartingale L.
We begin our study with a lemma which compares the expected payoff of the
game under the independent constraint for two different stopping rules. The proof is
given in the Appendix and relies on a coupling of marked Poisson processes.
Lemma 3.7. Suppose D and E are disjoint Borel subsets of X . Fix x ∈ X .
Suppose that τ ∈ RM AX (λ) satisfies Px (τ = ∞ or Xτ ∈ D) = 1. Then (τ ∧
M AX min
ηE , Xτ ∧ηEM AX ) and (τ ∧ηE , Xτ ∧ηEmin ) have the same distribution and J x (τ, ηE
min
)=
x M AX
J (τ, ηE ).
Similarly, given σ ∈ Rmin (λ) such that Px (σ = ∞ or Xσ ∈ E) = 1, then (ηD M AX

min x M AX
σ, XηD M AX ∧σ ) and (η
D ∧ σ, Xη min ∧σ ) have the same distribution and J (η
D D , σ) =
J x (ηD
min
, σ).
M AX
Taking τ = ηA C and E = B C in Lemma 3.7, and using the fact that J x (ηD C C
, ηE )=
x M AX M AX x min min
J (ηD , ηE ) = J (ηD , ηE ) for any Borel sets D, E, we have:
Corollary 3.8. Suppose that the Dynkin game under the common Poisson
constraint satisfies Assumption 3.1, with AC ∩ B C = ∅. Then ηA C C
C ∧ ηB C and ηAC
M AX

min C x C C x M AX min
ηB C are equally distributed and v (x) = J (ηAC , ηB C ) = J (ηAC , ηB C ) for every
x ∈ X.
C C
Note that Assumption 3.2 is about the properties of M∞ on the set {ηA C ∧ ηB C =
C C M AX min
∞}. Given that ηAC ∧ ηB C and ηAC ∧ ηB C have the same distribution, we may
use Assumption 3.2 to deduce that an analogous statement automatically holds in the
independent constraint setting:
Corollary 3.9. Suppose that Assumption 3.2 holds. Then, for every x ∈ X , the
random variable M∞ satisfies e−rγ EXγ [M∞ 1ηMCAX ∧ηmin ] = Ex [M∞ 1ηMCAX ∧ηmin
x

C =∞ C =∞
|Fγ ],
A B A B
for any γ ∈ RM AX (λ) or Rmin (λ).
For a Borel set D and for any τ ∈ RM AX (λ) we define γτ,D
M AX
to be the first event
time in (TnM AX )n≥1 from τ onwards such that the diffusion process X is in the set
M AX
D: γτ,D = inf{t ≥ τ : t ∈ (TjM AX )j≥1 , Xt ∈ D}. Note that γτ,D
M AX
∈ RM AX (λ).
min min min
Similarly, for any σ ∈ R (λ), define γσ,D = inf{t ≥ σ : t ∈ (Tj )j≥1 , Xt ∈ D} ∈
Rmin (λ).
Lemma 3.10. Suppose that the Dynkin game under the common Poisson con-
straint satisfies Assumption 3.1 and Assumption 3.2. Fix x ∈ X . Then, for any

τ ∈ RM AX (λ) and any σ ∈ Rmin (λ), the following equalities hold:

e−rτ v C (Xτx )1{τ <ηmin


C }∩{τ <γ
M AX }
C
= Ex [R(γτ,A
M AX
C , ηB C )|Fτ ]1{τ <η min }∩{τ <γ M AX } ;
min
C C
B τ,A B τ,A

e −rσ C
v 1
(Xσx ) {σ<ηMCAX }∩{σ<γ minC } = E x M AX
[R(ηA C
min
, γσ,B 1
C )|Fσ ] {σ<η M AX }∩{σ<γ min } .
A σ,B AC σ,B C

Proof. The proof follows by strong Markov property and is given in the Appendix.
Theorem 3.11. Suppose that the Dynkin game under the common Poisson con-
straint satisfies Assumption 3.1 and Assumption 3.2 and let the solution be given by
(v C , AC , B C ).
Suppose further that the value function is such that v C ∧ l ≤ u.
Then, the Dynkin game under the independent Poisson constraint has the same
solution as the game under the common Poisson constraint. That is, v I (x) = v C (x)
M AX min
for every x ∈ X , and moreover, (ηA C , ηB C ) is a saddle point of the game under

the independent Poisson constraint.


Corollary 3.12. Suppose that the Dynkin game under the common Poisson
constraint satisfies Assumption 3.1 and Assumption 3.2. Suppose that the optimal
stopping sets for the two players for the game under the common Poisson constraint
are such that the stopping sets are disjoint: AC ∩ B C = ∅. Then the conclusions of
Theorem 3.11 still hold, and the Dynkin game under the independent Poisson con-
straint has the same solution as the game under the common Poisson constraint.
Proof. If AC ∩ B C = ∅ then {x : v C (x) > l(x) > u(x)} = ∅ and {x : l(x) >
v (x) ≥ u(x)} = ∅. Hence v C ∧ l ≤ u. (The converse is also true so that v C ∧ l ≤ u
C

is equivalent to AC ∩ B C = ∅.)
Proof. [Proof of Theorem 3.11] Write A = AC and B = B C . Fix any x ∈
X . We aim to prove that J x (τ, ηB min
) ≤ J x (ηAM AX min
, ηB ) and J x (ηAM AX min
, ηB ) ≤
x M AX M AX min C
J (ηA , σ) for any τ ∈ R (λ) and σ ∈ R (λ). Note that if v ∧ l ≤ u then
{x : v C (x) > l(x) > u(x)} = ∅ so that A = {x : v C (x) < l(x)}.
Step 1: Fix any τ ∈ RM AX (λ). We prove that J x (τ, ηB min
) ≤ J x (γτ,A
M AX min
, ηB ).
We have:

J x (τ, ηB
min
)
= Ex [Lτ 1{τ <ηBmin } + UηBmin 1{ηBmin <τ } + M∞ 1{τ =ηBmin =∞} ]
= Ex [e−rτ l(Xτ )1{τ <ηBmin } 1{Xτ ∈A}
/ + Lτ 1{τ <ηBmin } 1{Xτ ∈A} + UηBmin 1{ηBmin <τ }
+M∞ 1{τ =ηBmin =∞} ]
< E [e x −rτ C
v (Xτ )1{τ <ηBmin } 1{Xτ ∈A}
/ + Lτ 1{τ <ηBmin } 1{Xτ ∈A} + UηBmin 1{ηBmin <τ }
+M∞ 1{τ =ηBmin =∞} ]
M AX 1{γ M AX <η min } + Uη min 1{γ M AX >η min } + M∞ 1{γ M AX =η min =∞} )1{τ <η min } 1{X ∈A}
x
= E [(Lγτ,A τ,A B B τ,A B τ,A B B τ/

+Lτ 1{τ <ηBmin } 1{Xτ ∈A} + UηBmin 1{ηBmin <τ } + M∞ 1{τ =ηBmin =∞} ]
M AX 1{γ M AX =τ <η min }∪{γ M AX <η min ,τ ̸=γ M AX }
x
= E [Lγτ,A τ,A B τ,A B τ,A

+UηBmin 1{τ <ηBmin <γτ,A


M AX }∪{η min <τ ≤γ M AX } + M∞ 1{τ ≤γ M AX =η min =∞} ]
B τ,A τ,A B

M AX 1{γ M AX <η min } + Uη min 1{η min <γ M AX } + M∞ 1{γ M AX =η min =∞} ]
x
= E [Lγτ,A τ,A B B B τ,A τ,A B

= J x (γτ,A
M AX min
, ηB ),

where the inequality is by definition of A, the third equality is by Lemma 3.10 and
M AX min
the fourth and fifth equality are by the property τ ≤ γτ,A and that P(τ = ηB <

∞) = 0.
Step 2: Following step 1, any τ ∈ RM AX (λ) is no better than γτ,A M AX
, therefore
M AX M AX
it is sufficient to prove that ηA is at least as good a strategy as γτ,A , for any τ .
M AX
Note that γτ,A satisfies Px (γτ,A
M AX
= ∞ or Xγτ,AM AX ∈ A) = 1. Define R
M AX
A (λ) =
{τ ∈ RM AX (λ) : Px (γτ,A
M AX
= ∞ or Xγτ,A
M AX ∈ A) = 1}, so for any τ ∈ R
M AX
(λ) the
M AX
corresponding γτ,A ∈ RM
A
AX
(λ). It suffices to prove that supτ ∈RM
A
x
AX (λ) J (τ, η
min
B )=
x M AX min
J (ηA , ηB ) holds. We have:

sup J x (τ, ηB
min
) = sup J x (τ, ηB
M AX
)
τ ∈RM
A
AX (λ) τ ∈RM
A
AX (λ)

≤ sup J x (τ, ηB
M AX
)
τ ∈RM AX (λ)

= J x (ηA
M AX M AX
, ηB )
= J x (ηA
M AX min
, ηB ),

where the first equality holds by Lemma 3.7, the inequality holds as the supremum
is taken over a larger set, the second equality holds by Assumption 3.1, and the final
equality is by Corollary 3.8.
M AX
It is clear that ηA ∈ RM A
AX
(λ), so we also have supτ ∈RM A
x
AX (λ) J (τ, η
min
B ) ≥
x M AX min x min x M AX min
J (ηA , ηB ). Therefore supτ ∈RM A
AX (λ) J (τ, η
B ) = J (ηA , ηB ) holds and
we conclude.
The proof of J x (ηA M AX min
, ηB ) ≤ J x (ηA
M AX
, σ) follows by a similar approach.
M AX min
Hence, (ηA , ηB ) is a saddle point for the game under the independent con-
straint when X0 = x, with game value v I (x) = v C (x) by Corollary 3.8. Given that
the above argument holds for any x ∈ X we can conclude that these properties hold
for any x ∈ X .
Remark 3.13. Suppose further that the functions v C , u and l are all continuous.
With the stopping regions AC and B C defined in Assumption 3.1, by seeking a
contradiction it can be proved that AC ∩ B C = ∅ is also a necessary condition for
the game under the independent Poisson constraint to have the same value as the
game under the common Poisson constraint. Indeed, if AC ∩ B C is nonempty then
J x (ηAM AX
C
min
, ηB x C C
C ) < J (ηAC , ηB C ).

This may not be the case if we make different choices of AC or B C . For example,
if we take ÃC = {x : v C (x) ≤ l(x)} ∪ {x : v C (x) > l(x) > u(x)} instead, then it
could be the case that there exists some x such that u(x) ≤ v C (x) ≤ l(x) holds, so
that ÃC ∩ B C ̸= ∅, but the game under the independent Poisson constraint still has
the same value as the game under the common Poisson constraint. In particular,
without some restrictions on the choices of stopping sets, the condition AC ∩ B C = ∅
of Corollary 3.12 is not necessary for the conclusion of Theorem 3.11 to hold.
Corollary 3.12 tells us that, subject to some regularity conditions at infinity,
if we have solved a Dynkin game under the common Poisson constraint and there
exists optimal stopping regions for the two players which do not overlap, then the
corresponding game under the independent Poisson constraint is also solved, with the
same value and the same optimal stopping regions. The converse of this statement is
not true. That is, if we have solved the game under the independent Poisson constraint
and the optimal stopping regions are disjoint, then the corresponding game under
common Poisson constraint may not have the same value. This is the subject of the
next subsection.

3.2. From the independent constraint to the common constraint. Our


goal now is to consider what happens if we begin with a solution of the game under
the independent constraint and ask for sufficient conditions for the solution to also be
the solution of the game under the common constraint. We begin by making similar
assumptions to those in Section 3.1.
Assumption 3.14. The Dynkin game under the independent Poisson constraint
has a value {v I (x)}x∈X , with saddle point (ηA M AX
I
min
, ηB I ), where A
I
= {x : v I (x) <
I I
l(x)} and B = {x : v (x) ≥ u(x)}.
Note that the optimal stopping set for the sup player has a different form to that
in Assumption 3.1, whereas the optimal stopping set for the inf player has the same
form. This asymmetry arises from our convention about what happens when both
players stop simultaneously.
Assumption 3.15. For xevery x ∈ X , the random variable M∞ and stopping times
ηAM AX
I
min
, ηB I satisfy e−rγ EXγ [M∞ 1ηMIAX ∧ηmin
I =∞
] = Ex [M∞ 1ηMIAX ∧ηminI =∞
|Fγ ], for
A B A B
any γ ∈ RM AX (λ) or Rmin (λ), where AI and B I are as defined in Assumption 3.14.
The coupling argument in the previous subsection yields:
Corollary 3.16. Suppose that Assumption 3.15 holds. Then, for every x ∈ X ,
the random variable M∞ satisfies e−rγ EXγ [M∞ 1ηCI ∧ηCI =∞ ] = Ex [M∞ 1ηCI ∧ηCI =∞ |Fγ ],
x

A B A B
for any γ ∈ RC (λ).
By Lemma 3.7, we have the following analogue of Corollary 3.8:
Corollary 3.17. Suppose that the Dynkin game under the independent Poisson
constraint satisfies Assumption 3.14, with AI ∩ B I = ∅. Then ηAC C M AX
I ∧ ηB I and ηAI ∧
min I x M AX min x C C
ηB I are equally distributed and v (x) = J (ηAI , ηB I ) = J (ηAI , ηB I ) for every
x ∈ X.
C C I C
Define γτ,A I = inf{t ≥ τ : t ∈ (Tj )j≥1 , Xt ∈ A } and γσ,B I = inf{t ≥ σ : t ∈
C I
(Tj )j≥1 , Xt ∈ B } in the same way as in the previous subsection. Following a similar
proof as in Lemma 3.10, we get:
Lemma 3.18. Suppose that the Dynkin game under the independent Poisson
constraint satisfies Assumption 3.14 and Assumption 3.15. Fix x ∈ X . Then, for any
τ, σ ∈ RC (λ), the following equalities hold:

e−rτ v I (Xτx )1{τ <ηCI }∩{τ <γ C } = Ex [R(γτ,A


C
I , ηB I )|Fτ ]1{τ <η C }∩{τ <γ C
C
} ;
B τ,AI B I τ,AI

e−rσ v I (Xσx )1{σ<ηCI }∩{σ<γ C } = Ex [R(ηA


C
I , γσ,B I )|Fσ ]1{σ<η C }∩{σ<γ C
C
}.
A σ,B I I
A σ,B I

Theorem 3.19. Suppose that the Dynkin game under the independent Poisson
constraint satisfies Assumption 3.14 and Assumption 3.15.
Suppose that the value function is such that v I ∧ l ≤ u.
Then, the Dynkin game under the common Poisson constraint has the same value
as the game under the independent Poisson constraint. That is, v C (x) = v I (x) for
C C
every x ∈ X . Moreover, (ηA I , ηB I ) is a saddle point of the game under the common

Poisson constraint.
Recall that, in the proof of Theorem 3.11, the first step is to prove that, given
any strategy τ ∈ RM AX (λ), it is always better for the sup player to choose γτ,A M AX
I .

Under the common constraint this is not always the case, because the sup player may
benefit from preempting the inf player’s stopping decision. For arbitrary τ ∈ RC (λ),
C I
consider the set {τ = ηB I < ∞}. It is clear that Xτ ∈ B and γτ,AI > τ holds on
I I
this set (assuming A ∩ B = ∅), and the sup player can either stop at τ and get
C
Lτ , or stop later and get Uτ . So γτ,A I is better in this case only if Uτ ≥ Lτ , which

is guaranteed under the extra assumption that the set {x : v I (x) > l(x) > u(x)} is
empty. This explains why, although the statement of Theorem 3.19 is very similar to
that of Theorem 3.11, the subsequent corollaries are different.
Corollary 3.20. Suppose that the Dynkin game under the independent Poisson
constraint satisfies Assumption 3.14 and Assumption 3.15.
Suppose that the optimal stopping sets for the two players are disjoint: AI ∩
B I = ∅. Suppose further that the set {x : v I (x) > l(x) > u(x)} is empty. Then
the conclusions of Theorem 3.19 still hold, and the Dynkin game under the common
Poisson constraint has the same solution as the game under the independent Poisson
constraint.
Proof. If AI ∩ B I = ∅ and {x : v I (x) > l(x) > u(x)} = ∅ then {x : l(x) > v I (x) ≥
u(x)} = ∅. Hence v I ∧ l ≤ u.
As an example in the next section will show, the condition that {x : v I (x) >
l(x) > u(x)} is empty is necessary, and the fact that AI and B I are disjoint alone is
not sufficient for (v I , AI , B I ) to be a solution of the common constraint problem.
Proof. [Proof of Theorem 3.19] Fix any x ∈ X . We will prove that, given any
strategy τ ∈ RC (λ), γτ,A C
I is a better strategy for the sup player. For typographical

reasons, for the rest of the proof we write A and B as shorthand for AI and B I .
Fix any τ ∈ RC (λ). By a similar argument as in the proof of step 1 in Theorem
3.11, we have:

J x (τ, ηB
C
)
= Ex [Lτ 1{τ <ηBC } + UηBC 1{ηBC <τ } + Lτ 1{τ =ηBC <∞} + M∞ 1{τ =ηBC =∞} ]
< Ex [e−rτ v I (Xτ )1{τ <ηBC } 1{Xτ ∈A}
/ + Lτ 1{τ <ηBC } 1{Xτ ∈A} + UηBC 1{ηBC <τ }
+UηBC 1{τ =ηBC <∞} + M∞ 1{τ =ηBC =∞} ]
C 1{γ C ≤η C ,γ C <∞} + Uη C 1{η C <γ C } + M∞ 1{γ C =η C =∞} ]
x
= E [Lγτ,A τ,A B τ,A B B τ,A τ,A B

= J x (γτ,A
C C
, ηB ),
C
where, for the inequality, Lτ ≤ Uτ holds on the set {τ = ηB < ∞} by the assumption
I
that {x : v (x) > l(x) > u(x)} is empty.
The proof of J x (ηA
C C
, γσ,B ) ≤ J x (ηA C
, σ) does not rely on the extra assumption
I
that the set {x : v (x) > l(x) > u(x)} is empty, hence follows a similar approach as
in Step 1 of Theorem 3.11. Then, by similar argument as in Step 2 of Theorem 3.11,
C C
the saddle point property of (ηA , ηB ) follows, and the value of the game under the
C I
common constraint is v (x) = v (x) by Corollary 3.17. Hence we conclude.
Remark 3.21. Similar to the argument in Remark 3.13, it can be proved that
the sufficient conditions stated in Corollary 3.20 are necessary for the game under
the common Poisson constraint to have the same value, given that AI is open, B I is
closed and the payoff functions and the value function v I are continuous.
A sufficient condition for the set {x : v I (x) > l(x) > u(x)} to be empty is
l(x) ≤ u(x), under which the Dynkin games under the common and the independent
constraints are equivalent. However, this order condition is not a necessary condition
for this extra assumption {x : v I (x) > l(x) > u(x)} = ∅. We will see an example in
the next section where the order condition fails, but the hypotheses of Corollary 3.20
are satisfied.
Corollary 3.22. Suppose that the order condition l ≤ u holds.
Suppose that Assumptions 3.1 and 3.2 hold. If (v C , AC , B C ) is a solution of the

game under the common Poisson constraint then (v C , AC , B C ) is also the solution of
the game under the independent Poisson constraint.
Suppose that Assumptions 3.14 and 3.15 hold. If (v I , AI , B I ) is a solution of the
game under the independent Poisson constraint then (v I , AI , B I ) is also the solution
of the game under the common Poisson constraint.
4. An example and a counterexample.
4.1. From the common constraint to the independent constraint: an
example. In this subsection, we construct an example where the players’ optimal
stopping regions are disjoint under the common constraint. As a consequence, Corol-
lary 3.12 tells us the solution (value function and optimal stopping regions of the
game) of the game under the common constraint is also a solution under the indepen-
dent constraint.
Let X be a Brownian motion. Consider the game with payoff functions l(x) = 1_{x∈[−1,1]} and u(x) = (λ/(λ+r)) 1_{x∈[−1,1]} + (1/(1+ϵ)) 1_{x∉[−1,1]}, where ϵ > 0 is sufficiently large (see (4.2) below). We also take M∞ = 0. (Note that lim_{t↑∞} e^{−rt} l(X_t) = 0 = lim_{t↑∞} e^{−rt} u(X_t), so this is the natural candidate for the payoff if neither player stops.)
These functions define a Dynkin game with payoff:
(4.1)    R(τ, σ) = e^{−rτ} 1_{X_τ∈[−1,1]} 1_{τ≤σ} 1_{τ<∞} + e^{−rσ} ( (λ/(λ+r)) 1_{X_σ∈[−1,1]} + (1/(1+ϵ)) 1_{X_σ∉[−1,1]} ) 1_{σ<τ}.

We solve the game with expected payoff J^x(τ, σ) = E^x[R(τ, σ)] under the common constraint. To simplify the expressions, we define θ = √(2r) and ϕ = √(2(λ + r)) and use these notations instead of λ and r when we present the value of the games. Also,
note that the functions l, u and the value are all even functions and our derivation
will focus on the positive real line with the values on the negative real line following
by symmetry.
We assume that

(4.2)    ϵ > (θ² sinh(ϕ) + θϕ cosh(ϕ)) / ((ϕ² − θ²) sinh(ϕ)).

Standard arguments imply that the value of this game should satisfy the HJB equation

(4.3)    LV − (ϕ²/2) V + ((ϕ² − θ²)/2) max{min(V, u), l} = 0,

where Lf = ½ f″ is the infinitesimal generator of a Brownian motion. Therefore our
strategy is to construct a solution to (4.3) and then to verify that this solution is the
value of the game.
We expect that the sup player seeks to stop when |X| is small whereas the inf
player seeks to stop when |X| is moderate but not too large (when |X| is large the inf
player hopes that discounting will reduce the value of any payoff to the sup player
over time, and does not wish to stop). Since l has a jump at ±1 we expect that the
threshold between small and moderate is at ±1. We expect the threshold between
moderate and large to occur at some point x∗ which is characterised by V(x∗) = 1/(1+ϵ).
.
We further expect that the value function of the game will be decreasing on the
positive real line.
Define the function f : (0, ∞) → (0, ∞) by f (x) = θ(θ sinh(ϕx) + ϕ cosh(ϕx)).

Recall the lower bound (4.2) on ϵ.


Lemma 4.1. The equation f (x) = (ϕ2 − θ2 )ϵ sinh(ϕ) has a unique solution, which
we label x̂. Furthermore, x̂ ∈ (1, ∞).
Proof. Observe that f′(x) = θϕ(θ cosh(ϕx) + ϕ sinh(ϕx)) > 0 on [0, ∞), and that lim_{x↑∞} f(x) = ∞. Therefore f is increasing on (0, ∞) and any solution of f(x) = f₀ is unique. The assumption that ϵ > (θ² sinh(ϕ) + θϕ cosh(ϕ))/((ϕ² − θ²) sinh(ϕ)) implies that f(1) < (ϕ² − θ²)ϵ sinh(ϕ). It follows that there exists a unique positive solution x̂ to f(x̂) = (ϕ² − θ²)ϵ sinh(ϕ) and x̂ > 1.
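As a quick numerical illustration (ours, not from the paper), x̂ can be located by bisection; taking λ = r = 1 and ϵ = 9, the parameter values used for Figure 4.1 below, gives x̂ ≈ 1.65.

```python
import math

lam, r, eps = 1.0, 1.0, 9.0             # parameters as in Figure 4.1
theta, phi = math.sqrt(2 * r), math.sqrt(2 * (lam + r))

f = lambda x: theta * (theta * math.sinh(phi * x) + phi * math.cosh(phi * x))
target = (phi**2 - theta**2) * eps * math.sinh(phi)

lo, hi = 1.0, 10.0                      # f(1) < target < f(10), and f is increasing
assert f(lo) < target < f(hi)
for _ in range(60):                     # simple bisection
    mid = 0.5 * (lo + hi)
    if f(mid) < target:
        lo = mid
    else:
        hi = mid
print(f"x_hat ~= {0.5 * (lo + hi):.4f}")   # approximately 1.65 for these parameters
```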
Define the candidate value function

(4.4)    V(x) =
    D e^{θx},                                                x < −x∗;
    B e^{−ϕx} + C e^{ϕx} + ((ϕ² − θ²)/ϕ²)·(1/(1+ϵ)),          x ∈ [−x∗, −1);
    A cosh(ϕx) + (ϕ² − θ²)/ϕ²,                                x ∈ [−1, 1];
    B e^{ϕx} + C e^{−ϕx} + ((ϕ² − θ²)/ϕ²)·(1/(1+ϵ)),          x ∈ (1, x∗];
    D e^{−θx},                                                x > x∗,

where A, B, C, D and x∗ are to be determined.


Recall that a strong solution of a HJB equation is a twice (weakly) differentiable
function satisfying the equation almost everywhere (See Gilbarg and Trudinger [13]
Chapter 9).
Lemma 4.2. Set x∗ = x̂ where x̂ is as defined in Lemma 4.1 and set

    A = ((θ² − θϕ)/ϕ²)·(1/(1+ϵ))·e^{−ϕx∗} − ((ϕ² − θ²)/ϕ²)·(ϵ/(1+ϵ))·e^{−ϕ} < 0;
    B = ((θ² − θϕ)/(2ϕ²))·(1/(1+ϵ))·e^{−ϕx∗} < 0;
    C = ((θ² + θϕ)/(2ϕ²))·(1/(1+ϵ))·e^{ϕx∗} > 0;
    D = (1/(1+ϵ))·e^{θx∗} > 0.

Then the function V is nonnegative, even, decreasing on [0, ∞), and is in C 1 . The
first derivative V ′ is bounded, and the second derivative V ′′ exists and is continuous
except at x = ±1. The second left and right derivatives of V exist at x = ±1 and are
finite.
Moreover, V satisfies: V(x∗) = 1/(1+ϵ), l > u > V on the set [−1, 1], V ≥ u > l on
, l > u > V on the set [−1, 1], V ≥ u > l on
the set [−x , −1) ∪ (1, x ] and u > V > l on the set (−∞, −x∗ ) ∪ (x∗ , ∞).
∗ ∗

As a result, V is a strong solution of the HJB equation and satisfies (4.3) on


R \ {−1, 1}.
Proof. Given that the functions l, u and V are even functions, it suffices to prove
the results on [0, ∞).
We require that V is C¹ at x = 1 and x = x∗ and that V(x∗) = 1/(1+ϵ). These five
. These five

conditions are sufficient to fix the five unknowns A, B, C, D and x , and the values
given in the lemma can be derived after some algebra.
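The following numerical check (ours, not from the paper) makes that algebra concrete for λ = r = 1 and ϵ = 9: it computes x∗ = x̂, evaluates the constants stated in the lemma, and verifies the smooth-fit conditions at x = 1 and x = x∗ together with V(x∗) = 1/(1+ϵ).

```python
import math

lam, r, eps = 1.0, 1.0, 9.0
theta, phi = math.sqrt(2 * r), math.sqrt(2 * (lam + r))

# x* = x_hat solves f(x) = (phi^2 - theta^2) * eps * sinh(phi)  (Lemma 4.1)
f = lambda x: theta * (theta * math.sinh(phi * x) + phi * math.cosh(phi * x))
target = (phi**2 - theta**2) * eps * math.sinh(phi)
lo, hi = 1.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
xs = 0.5 * (lo + hi)

# Constants of Lemma 4.2
A = ((theta**2 - theta*phi)/phi**2) * math.exp(-phi*xs)/(1+eps) \
    - ((phi**2 - theta**2)/phi**2) * eps/(1+eps) * math.exp(-phi)
B = ((theta**2 - theta*phi)/(2*phi**2)) * math.exp(-phi*xs)/(1+eps)
C = ((theta**2 + theta*phi)/(2*phi**2)) * math.exp(phi*xs)/(1+eps)
D = math.exp(theta*xs)/(1+eps)

# Pieces of V on [0, infinity) from (4.4), and their derivatives
mid_V  = lambda x: A*math.cosh(phi*x) + (phi**2 - theta**2)/phi**2
mid_dV = lambda x: A*phi*math.sinh(phi*x)
rgt_V  = lambda x: B*math.exp(phi*x) + C*math.exp(-phi*x) + (phi**2 - theta**2)/(phi**2*(1+eps))
rgt_dV = lambda x: phi*(B*math.exp(phi*x) - C*math.exp(-phi*x))
out_V  = lambda x: D*math.exp(-theta*x)
out_dV = lambda x: -theta*D*math.exp(-theta*x)

print("value gap at x=1  :", mid_V(1.0) - rgt_V(1.0))     # ~ 0 (continuity)
print("slope gap at x=1  :", mid_dV(1.0) - rgt_dV(1.0))   # ~ 0 (smooth fit)
print("value gap at x=x* :", rgt_V(xs) - out_V(xs))        # ~ 0
print("slope gap at x=x* :", rgt_dV(xs) - out_dV(xs))      # ~ 0
print("V(x*) - 1/(1+eps) :", out_V(xs) - 1.0/(1+eps))      # ~ 0
```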
Remark 4.3. The candidate value function V is not C 2 by Lemma 4.2. However,
given that V is C 1 and V ′′ fails to exist at a finite number of points, Itô’s formula
is still valid for V(X_t), in the sense that V(X_t) can still be written in the form V(X_t) = V(x) + ∫₀ᵗ V′(X_s) dX_s + ∫₀ᵗ ½ V″(X_s) d[X]_s; see [18, Problem 3.6.24].

Lemma 4.4. The function V is the value of the game under the common Poisson
C C C
constraint, and (η[−1,1] , η[−x ∗ ,−1)∪(1,x∗ ] ) is a saddle point. That is, v = V , AC =
[−1, 1] and B C = [−x∗ , −1) ∪ (1, x∗ ].
The proof of Lemma 4.4 is fairly standard (although it is for stopping under
a Poisson constraint so not entirely so) and as our proof relies on a result from
the next section it is given in the Appendix. The next result follows immediately
from Corollary 3.12.
Corollary 4.5. V is also the value of the game under the independent Poisson
M AX min
constraint, and (η[−1,1] , η[−x ∗ ,−1)∪(1,x∗ ] ) is a saddle point.

Fig. 4.1. The payoff functions and value of the game with payoff (4.1) under the common
Poisson constraint, where we take λ = r = 1 and ϵ = 9.

4.2. From the independent constraint to the common constraint: a


counterexample. Now we modify the game we solved in the previous subsection
in such a way that the value functions for the independent and common constraint
models no longer agree.
Let V be as defined in (4.4) with the constants as specified in Lemma 4.2. Fix
δ ∈ (0, (x∗ − 1)/3) and define l̃(x) = 1_{x∈[−1,1]} + V(x∗ − δ) 1_{x∈(−x∗+2δ,−1)∪(1,x∗−2δ)}, and

ũ = u. Then, the payoff of the game is given by

R̃(τ, σ) = e^{−rτ} (1_{X_τ∈[−1,1]} + V(x∗ − δ) 1_{X_τ∈(−x∗+2δ,−1)∪(1,x∗−2δ)}) 1_{τ≤σ}

(4.5)        + e^{−rσ} ( ((ϕ² − θ²)/ϕ²) 1_{X_σ∈[−1,1]} + (1/(1+ϵ)) 1_{X_σ∉[−1,1]} ) 1_{σ<τ}.

We want to argue that V is the value of the game under the independent Poisson
constraint but not under the common Poisson constraint.
Recall that V is decreasing on [0, ∞) and is an even function, therefore V (x) >
V(x∗ − δ) for x ∈ (−x∗ + 2δ, −1) ∪ (1, x∗ − 2δ). Hence V > l̃ > ũ holds on this set. Therefore the function V satisfies the HJB equation LV − ((2ϕ² − θ²)/2) V + ((ϕ² − θ²)/2) max(V, l̃) + ((ϕ² − θ²)/2) min(ũ, V) = 0. Therefore, similarly to the previous subsection, one can verify
that V is the value function of the game with payoff (4.5) under the independent
M AX min I
Poisson constraint, and that (η[−1,1] , η[−x ∗ ,−1)∪(1,x∗ ] ) is a saddle point. Hence, v =V,
I I ∗ ∗
A = [−1, 1] and B = [−x , −1) ∪ (1, x ].

Fig. 4.2. The payoff functions and value of the game with payoff (4.5) under the independent Poisson constraint, where we take λ = r = 1, ϵ = 9 and δ = (x∗ − 1)/4. V is not the value of the game under the common Poisson constraint as the set {x : V(x) > l̃(x) > ũ(x)} is nonempty.

2 2 2
However, V does not satisfy LV − (ϕ²/2) V + ((ϕ² − θ²)/2) max(min(ũ, V), l̃) = 0. Indeed, on the set (−x∗ + 2δ, −1) ∪ (1, x∗ − 2δ), we have max(min(ũ, V), l̃) = l̃ > 1/(1+ϵ) by construction, and therefore LV − (ϕ²/2) V + ((ϕ² − θ²)/2) max(min(ũ, V), l̃) > 0, since V satisfies LV − (ϕ²/2) V + ((ϕ² − θ²)/2)·(1/(1+ϵ)) = 0 there by Lemma 4.2. Hence V cannot be the value function of the game with payoff (4.5) under the common constraint and the sup player should deviate from η^{MAX}_{[−1,1]} and also stop on the set (−x∗ + 2δ, −1) ∪ (1, x∗ − 2δ). In particular, although A^I and B^I are disjoint, this does not imply v^I = v^C or that (η^C_{A^I}, η^C_{B^I}) is a saddle point.
I , ηB I ) is a

saddle point.
5. Existence of solutions for perpetual Dynkin games.
5.1. Zero-sum Dynkin games under the common constraint. In this sec-
tion, we prove an existence and uniqueness result for the value of the Dynkin game
under the common constraint. In this way we show that Assumption 3.1 is satisfied
for a wide class of problems based on payoffs which are functions of an underlying
diffusion. The mathematical innovation in this section is that we consider an infinite
horizon BSDE: most of the extant literature considers a finite horizon BSDE which
is typically simpler.
In this subsection we focus on the game under the common constraint. We will
omit the superscript C in this subsection for notational simplicity.
Instead of considering any filtration that supports the process X and the Pois-
son process we work on a minimal filtration to allow the application of martingale

representation theorem. Given the probability space (Ω, F, P) we define the filtration
F∗ = {Ft∗ }t≥0 as the smallest filtration that contains both the natural filtration of
X, FX = {FtX }t≥0 and the natural filtration of the Poisson process, H = {Ht }t≥0 ,
i.e., for each t ≥ 0, Ft∗ = σ(FtX ∪ Ht ). Finally, the filtration F we use is the aug-
mented version of F∗ chosen to satisfy the usual conditions of right-continuity and
completeness with respect to P.
For any T ∈ (0, ∞], we define the spaces
2
F-progressively measurable processes y : ∥y∥2S 2 < ∞ ;

Sa,T =
a,T
2
F-progressively measurable processes Z : ∥Z∥2H2 < ∞ .

Ha,T =
a,T

where the weighted norms are defined as


" # "Z #
T
∥y∥2S 2 =E sup e2at yt2 ; ∥Z∥2H2 =E e 2at
Zt2 dt .
a,T a,T
t∈[0,T ] 0

It is clear that for T < ∞, the above weighted norms are equivalent for any a ∈ R.
However, for T = ∞, the weighted norms become stronger as a increases, and the
corresponding space becomes smaller.
We work in the setting of a time-homogeneous diffusion process X (with initial
value X0 = x) and payoffs which are discounted functions of X = X x .
Assumption 5.1. For fixed initial value x ∈ X the processes l(X) = (l(Xt ))t≥0
and u(X) = (u(Xt ))t≥0 satisfy the following:
2
(1) l(X), u(X) ∈ Hα,∞ for some α > −r, where r > 0 is the discount factor
defined in Section 2.
(2) E[supt≥0 (Lt ∨ Ut )] < ∞, and lim Lt and lim Ut both exist a.s.. We set L∞ =
t↑∞ t↑∞
lim Lt and U∞ = lim Ut .
t↑∞ t↑∞
Note that, (1) will be required for the BSDE argument. It will follow from
Assumptions (1) and (2) that L∞ = 0 = U∞ , as we shall see in Lemma 5.5.
Remark 5.2. A typical example in which the above assumption is satisfied is the
following: let l satisfy |l(x)| ≤ C(1 + |x|), let u satisfy |u(x)| ≤ C(1 + |x|) and let X
be a geometric Brownian motion with X0 = x, drift µ, and volatility σ and suppose
r > max{µ + 21 σ 2 , 0}.
1 2 1 2
Then we have Xt = xe(µ− 2 σ )t+σWt . It follows that e2αt Xt2 = x2 e2(α+µ− 2 σ )t+2σWt .
Let r > max{µ + 21 σ 2 , 0}. In turn, by an application of the inequality |l(x)|2 ≤
2C 2 (1 + |x|2 ),
    
2αt 2 2 1 2
E[e l(Xt ) ] ≤ 2C exp(2αt) + exp 2 α + µ + σ t .
2
R∞
It follows that E[ 0 e2αt l(Xt )2 dt] < ∞ holds if µ+ 12 σ 2 +α < 0 and α < 0, and further
that l(X) ∈ Hα,∞ 2
for some α ∈ (−r, 0). Furthermore, given that µ + 12 σ 2 + α < 0, it
follows that µ < r and the integrability of supt L is equivalent to the integrability of the
maximum of a geometric Brownian motion with negative drift, which is immediate.
It is also immediate that Lt → 0 a.s.. Similar results apply to u and U .
Consider the following infinite horizon BSDE defined on [0, ∞),

(5.1) dyt = − [λ max{l(Xt ), min(yt , u(Xt ))} − (λ + r)yt ] dt + Zt dWt , t ≥ 0,



subject to the asymptotic condition

(5.2)    lim_{t↑∞} E[e^{2αt} y_t²] = 0,

where α is introduced in Assumption 5.1.


A solution to BSDE (5.1) subject to the asymptotic condition (5.2) is a pair of
F-progressively measurable processes (y, Z) satisfying
Z T Z T
yt = yT + [λ max{l(Xs ), min(ys , u(Xs ))} − (λ + r)ys ] ds − Zs dWs
t t

for 0 ≤ t ≤ T < ∞, and such that the asymptotic condition (5.2) holds. We aim to
2 2
find a solution (y, Z) in the spaces Sα,∞ × Hα,∞ .
The main idea behind solving the BSDE (5.1) subject to the asymptotic condition
(5.2) is to first approximate it by a sequence of finite horizon BSDEs and establish
uniform estimates for their solutions. The existence of a solution to the infinite-horizon
problem (Proposition 5.3) then follows from a fairly standard compactness argument.
However, since we are dealing with unbounded solutions for infinite horizon BSDEs,
whereas the majority of results in this direction only consider bounded solutions, we
provide a proof in the Appendix.
Proposition 5.3. Suppose that Assumption 5.1 holds. Then there exists a unique
2 2
solution (y, Z) ∈ Sα,∞ × Hα,∞ to BSDE (5.1) subject to the asymptotic condition
(5.2).
Lemma 5.4. Let (y, Z) be the unique solution of (5.1) subject to the asymptotic
condition (5.2). Then for n ≥ 0, y is also a solution to the following recursive equation
for n ≥ 0:

(5.3) e−rTn yTn = E[e−rTn+1 max{l(XTn+1 ), min(yTn+1 , u(XTn+1 ))}|FTn ].


2 2
Proof. Given that (y, Z) ∈ Sα,∞ × Hα,∞ with α > −r and that the weighted
2 2
norms ∥·∥S 2 and ∥·∥H2 become stronger as a increases, it follows that (y, Z) ∈
a,∞ a,∞
2 2
S−r,∞ × H−(λ+r),∞ .
Applying Itô’s formula, we obtain the following expression for T > t ≥ 0:
Z T Z T
e−(λ+r)t yt = e−(λ+r)T yT + e−(λ+r)s λ max{l(Xs ), min(ys , u(Xs ))} ds− e−(λ+r)s Zs dWs .
t t

2
By the result that Z ∈ H−(λ+r),∞ ,
the stochastic integral term is square integrable
hence is a uniformly integrable martingale. For T > Tn , utilizing the density of
Tn+1 − Tn conditional on FTn , we have:

e−rTn yTn = E e−λ(T −Tn ) e−rT yT
Z T 
−λ(s−Tn ) −rs
+ e e λ max{l(Xs ), min(ys , u(Xs ))} ds FTn
T
 n
= E e−rT yT 1{Tn+1 ≥T }

−rTn+1
+e λ max{l(XTn+1 ), min(yTn+1 , u(XTn+1 ))}1{Tn+1 <T } FTn .

2
Since y ∈ S−r,∞ , the term E[e−rT yT 1{Tn+1 ≥T } ] vanishes as we take T ↑ ∞. Thus,
the result follows by monotone convergence.
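To illustrate how the recursion (5.3) can be used in practice, the following sketch (ours, not from the paper) iterates the map y ↦ E[e^{−rT₁} max{l(X_{T₁}), min(y(X_{T₁}), u(X_{T₁}))}] on a spatial grid, with X a standard Brownian motion and the payoff functions of Section 4.1; the grid, sample sizes and tolerance are arbitrary illustrative choices. The fixed point approximates the value function v^C of Theorem 5.8, and the stopping sets can then be read off as A^C = {v^C < l} ∪ {v^C > l > u} and B^C = {v^C ≥ u}.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, r, eps = 1.0, 1.0, 9.0

# Payoff functions of Section 4.1 (an illustrative choice)
l = lambda x: (np.abs(x) <= 1.0).astype(float)
u = lambda x: np.where(np.abs(x) <= 1.0, lam / (lam + r), 1.0 / (1.0 + eps))

grid = np.linspace(-5.0, 5.0, 401)          # spatial grid for the value function
y = np.zeros_like(grid)                     # initial guess for v^C on the grid

n_mc = 4000
T = rng.exponential(1.0 / lam, size=n_mc)   # first Poisson event time T_1
Z = rng.normal(size=n_mc)                   # standard normals driving W_{T_1}

for _ in range(200):                        # fixed-point iteration of (5.3)
    driver = np.maximum(l(grid), np.minimum(y, u(grid)))
    y_new = np.empty_like(y)
    for i, x in enumerate(grid):
        X_T = x + np.sqrt(T) * Z            # X_{T_1} given X_0 = x (Brownian motion)
        y_new[i] = np.mean(np.exp(-r * T) * np.interp(X_T, grid, driver))
    if np.max(np.abs(y_new - y)) < 1e-4:
        y = y_new
        break
    y = y_new

# Classify a few states using A^C = {v < l} or {v > l > u}, and B^C = {v >= u}
for x0 in (0.0, 1.3, 2.5):
    i = int(np.argmin(np.abs(grid - x0)))
    vi, li, ui = y[i], l(grid)[i], u(grid)[i]
    region = "A (sup stops)" if (vi < li) or (vi > li > ui) else \
             ("B (inf stops)" if vi >= ui else "continue")
    print(f"x = {x0}: v ~ {vi:.3f}, {region}")
```

For these parameters the iteration reproduces the qualitative picture of Section 4.1: the sup player stops on approximately [−1, 1], the inf player on the band of moderate |x| just outside it, and neither stops for large |x|.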
We have proved that, under Assumption 5.1, there exists a solution (yTn )n≥0 to
the recursive equation (5.3). In the remaining part of this section, we will show that
the solution to the recursive equation (5.3) defines the value of the game with payoff
R under the common constraint, under the assumption that M∞ = 0. (Note that by
Lemma 5.5 L∞ = 0 = U∞ so that in this case the only natural candidate value for
M∞ is zero.)
We need uniform integrability for the verification of game value, and this is covered
by the following lemma:
Lemma 5.5. Let (y, Z) be the unique solution of BSDE (5.1) subject to the
asymptotic condition (5.2). Define Yt = e−rt yt and Ŷt = max{min{Ut , Yt }, Lt }.
Then hthe following iresults hold:
h i
(1) E supt∈[0,∞) Yt < ∞; E supt∈[0,∞) Ŷt < ∞.
(2) Lt , Ut , Yt , Ŷt → 0 a.s. as t → ∞.
Proof. See Appendix.
In the game under the common constraint, the game starts at time 0 and players
are not allowed to stop immediately, so T1 is the first time when the players can stop.
We want to consider an auxiliary game with the same payoff as defined in (2.1), but
allowing agents an additional opportunity to stop at time 0.
Let T0 = 0 and let R0 (λ) = R(λ) ∪ {T0 }. Then the upper and lower values of
this auxiliary game are:

(5.4) ν0 = inf sup E[R(τ, σ)];


σ∈R0 (λ) τ ∈R0 (λ)

(5.5) ν0 = sup inf E[R(τ, σ)].


τ ∈R0 (λ) σ∈R0 (λ)

We will consider the dynamic versions of the original and auxiliary games. Con-
sider the problem where two players aim to maximize/minimize R(τ, σ) conditioning
on the information at time Tk . Given any τ, σ, the outcome of the problem is a
random variable, and should be maximized/minimized in the sense of essential supre-
mum/infimum. For k ≥ 1 let Rk (λ) = {γ : γ is a F − stopping time such that γ(ω) =
Tn (ω) for some n ∈ {k, . . . ∞}}. (Note that putting k = 0 in this definition recovers
R0 defined above.) Then the set of admissible strategies for the dynamic version
of the original game is Rk+1 (λ), and the set of admissible strategies for the dynamic
version of the auxiliary game is Rk (λ). Hence we define the following upper and lower
values for the dynamic version of the original game as:

(5.6) v Tk = ess inf ess sup E[R(τ, σ)|FTk ];


σ∈Rk+1 (λ) τ ∈Rk+1 (λ)

(5.7) v Tk = ess sup ess inf E[R(τ, σ)|FTk ].


τ ∈Rk+1 (λ) σ∈Rk+1 (λ)

We also define the upper and lower values for the dynamic version of the auxiliary
game as:

(5.8) ν Tk = ess inf ess sup E[R(τ, σ)|FTk ];


σ∈Rk (λ) τ ∈Rk (λ)

(5.9) ν Tk = ess sup ess inf E[R(τ, σ)|FTk ].


τ ∈Rk (λ) σ∈Rk (λ)

Lemma 5.6. Let (y, Z) be the unique solution of (5.1). Let Yt and Ŷt be as
defined in Lemma 5.5, with Ŷ∞ = 0.
Then ŶTk = ν Tk = ν Tk , where ν Tk and ν Tk are as defined in (5.8) and (5.9),
respectively. Moreover, we have ŶTk = E[R(τk∗ , σk∗ )|FTk ], where σk∗ = inf{TN ≥ Tk :
YTN ≥ UTN } and τk∗ = inf{TN ≥ Tk : YTN < LTN or YTN > LTN > UTN }.
Proof. We will later establish that:
(1)(ŶTn ∧τ ∧σk∗ )n≥k is a uniformly integrable supermartingale for any τ ∈ Rk (λ).
(2)(ŶTn ∧τk∗ ∧σ )n≥k is a uniformly integrable submartingale for any σ ∈ Rk (λ).
Before proving these properties, we will first show that conditions (1) and (2) are
sufficient for the desired result. Note that, by Lemma 5.5, R(τ, σ) = Ŷσ∧τ = 0 on the
set {τ = σ = ∞}.
Observe that, on the set {τ ≤ σk∗ }, Ŷτ ≥ Lτ holds by the definition of Ŷτ =
max{min{Uτ , Yτ }, Lτ }. Also, we have Yσk∗ ≥ Uσk∗ on the set {σk∗ < τ } by the definition
of σk∗ , thus Ŷσk∗ = max{min{Uσk∗ , Yσk∗ }, Lσk∗ } ≥ Uσk∗ holds on this set. Hence, using the
property that (ŶTn ∧τ ∧σk∗ )n≥k is a uniformly integrable supermartingale, the optional
stopping theorem implies:

ŶTk ≥ E[Ŷτ ∧σk∗ |FTk ]


≥ E[Lτ 1τ ≤σk∗ + Uσk∗ 1σk∗ <τ |FTk ]
(5.10) = E[R(τ, σk∗ )|FTk ].

Next, we have both Yσ ≥ Lσ and Uσ ≥ Lσ on the set {σ < τk∗ }, hence Ŷσ ≤ Uσ
holds by the identity Ŷσ = max{min{Uσ , Yσ }, Lσ }. We also have Ŷτk∗ ≤ Lτk∗ on the
set {τk∗ ≤ σ}. Indeed, by the definition of τk∗ , there are three possible cases about
the order among Yτk∗ , Lτk∗ and Uτk∗ : Yτk∗ < Lτk∗ ≤ Uτk∗ , Yτk∗ < Lτk∗ with Uτk∗ < Lτk∗ ,
and Yτk∗ > Lτk∗ > Uτk∗ . In any of these three cases, min{Uτk∗ , Yτk∗ } ≤ Lτk∗ holds, hence
Ŷτk∗ ≤ Lτk∗ holds by the identity Ŷτk∗ = max{min{Uτk∗ , Yτk∗ }, Lτk∗ }.
Hence, using the property that (ŶTn ∧τk∗ ∧σ )n≥k is a uniformly integrable sub-
martingale, the optional stopping theorem implies:

ŶTk ≤ E[Ŷτk∗ ∧σ |FTk ]


≤ E[Lτk∗ 1τk∗ ≤σ + Uσ 1σ<τk∗ |FTk ]
(5.11) = E[R(τk∗ , σ)|FTk ].

The inequality (5.10) implies that ŶTk ≥ ess supτ ∈Rk (λ) E[R(τ, σk∗ )|FTk ] ≥ ν Tk . Simi-
larly, we have ŶTk ≤ ν Tk by (5.11). It is obvious that ν Tk ≥ ν Tk , therefore the equality
ŶTk = ν Tk = ν Tk holds and ŶTk = E[R(τk∗ , σk∗ )|FTk ] is immediate.
It remains to prove properties (1) and (2). Uniform integrability of Ŷ follows by
Lemma 5.5. To prove the supermartingale property, we fix n ≥ k. We have:

E[Ŷ_{T_{n+1} ∧ τ ∧ σ_k^*} | F_{T_n}] = 1_{τ ∧ σ_k^* ≤ T_n} Ŷ_{T_n ∧ τ ∧ σ_k^*} + 1_{τ ∧ σ_k^* ≥ T_{n+1}} E[Ŷ_{T_{n+1}} | F_{T_n}].

Observe that, on the set {τ ∧ σ_k^* ≥ T_{n+1}}, we have T_n < σ_k^*. Therefore U_{T_n} > Y_{T_n}
holds on this set, hence Ŷ_{T_n} = max{Y_{T_n}, L_{T_n}} ≥ Y_{T_n} = E[Ŷ_{T_{n+1}} | F_{T_n}] holds on this
set, and this implies the supermartingale property.
For the submartingale property, we have T_n < τ_k^* on the set {τ_k^* ∧ σ ≥ T_{n+1}},
which implies that min{U_{T_n}, Y_{T_n}} ≥ L_{T_n}. It follows that Ŷ_{T_n} ≤ Y_{T_n} holds on this set,
hence the submartingale property follows.
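
To make the case analysis in the proof above concrete, here is a minimal helper (a sketch with illustrative names of our own; it is not code from the paper) that evaluates Ŷ = max{min{U, Y}, L} at a single Poisson time and reports which of the rules defining σ_k^* and τ_k^* is triggered there.

```python
def poisson_time_decision(l_val, u_val, y_val):
    """Evaluate one Poisson opportunity, following the rules in Lemma 5.6.

    Returns (y_hat, sup_stops, inf_stops) where y_hat = max(min(u_val, y_val), l_val),
    sup_stops indicates the rule defining tau_k^* (Y < L, or Y > L > U), and
    inf_stops indicates the rule defining sigma_k^* (Y >= U).  Both flags can be
    True at the same time, in which case both players stop.
    """
    y_hat = max(min(u_val, y_val), l_val)
    sup_stops = (y_val < l_val) or (y_val > l_val > u_val)
    inf_stops = y_val >= u_val
    return y_hat, sup_stops, inf_stops

# The three orderings discussed in the proof (illustrative numbers only):
print(poisson_time_decision(2.0, 5.0, 1.0))   # Y < L <= U        -> (2.0, True, False)
print(poisson_time_decision(2.0, 1.5, 1.0))   # Y < L with U < L  -> (2.0, True, False)
print(poisson_time_decision(3.0, 2.0, 4.0))   # Y > L > U         -> (3.0, True, True)
```

In each of the three orderings the helper returns Ŷ = L, consistent with the bound Ŷ_{τ_k^*} ≤ L_{τ_k^*} used in the proof.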


Theorem 5.7. Let (y, Z) be the unique solution of BSDE (5.1), and let Yt be as
defined in Lemma 5.5.
Then Y_{T_k} = \overline{v}_{T_k} = \underline{v}_{T_k}, where \overline{v}_{T_k} and \underline{v}_{T_k} are as defined in (5.6) and (5.7),
respectively. Moreover, we have Y_{T_k} = E[R(τ_{k+1}^*, σ_{k+1}^*) | F_{T_k}], where τ_{k+1}^*, σ_{k+1}^* are as
defined in Lemma 5.6.
In particular, Y_0 = y_0 is the value of the Dynkin game with payoff R under the
common Poisson constraint, and (τ_1^*, σ_1^*) is a saddle point of this game.
Proof. Observe that, for any τ ∈ R_{k+1}(λ),

Y_{T_k} = E[Ŷ_{T_{k+1}} | F_{T_k}] ≥ E[E[R(τ, σ_{k+1}^*) | F_{T_{k+1}}] | F_{T_k}] = E[R(τ, σ_{k+1}^*) | F_{T_k}],

where the first equality is by (5.3) and the inequality is by (5.10). Similarly, by
(5.11), Y_{T_k} ≤ E[R(τ_{k+1}^*, σ) | F_{T_k}] holds for any σ ∈ R_{k+1}(λ). Hence, Y_{T_k} = \overline{v}_{T_k} = \underline{v}_{T_k} =
E[R(τ_{k+1}^*, σ_{k+1}^*) | F_{T_k}].
By taking k = 0, we get Y_{T_0} = y_0 and J^x(τ, σ_1^*) ≤ y_0 ≤ J^x(τ_1^*, σ) for any
τ, σ ∈ R_1(λ). Hence y_0 is the value and (τ_1^*, σ_1^*) is a saddle point of the game under
the common constraint.
Theorem 5.8. Suppose that X, l and u are such that Assumption 5.1 holds for
each x ∈ X and suppose that M∞ = 0. Then Assumptions 3.1 and 3.2 are satisfied.
Proof. Given that the driver does not depend directly on t, the solution to the
infinite horizon BSDE (5.1) has a Markovian representation which does not depend
on t. This defines a function v^C on X such that y_t^x = v^C(X_t^x), where y^x denotes the
solution of the system (5.1) for initial value X_0 = x.
By Theorem 5.7 and the Markovian representation v^C, conditional on any initial
value x, v^C(x) is the value of the Dynkin game with payoff R under the common
constraint, and the saddle point (τ_1^*, σ_1^*) defined in Theorem 5.7 has the form
σ_1^* = inf{T_N ≥ T_1 : v^C(X_{T_N}) ≥ u(X_{T_N})} and τ_1^* = inf{T_N ≥ T_1 : v^C(X_{T_N}) <
l(X_{T_N}) or v^C(X_{T_N}) > l(X_{T_N}) > u(X_{T_N})}. Hence Assumption 3.1 is satisfied.
Since M∞ = 0, Assumption 3.2 is trivially satisfied.
It follows from the results of this section that there is a wide class of examples
for which Assumptions 3.1 and 3.2 hold. It should be noted, however, that this class
is much wider than the class covered by Assumption 5.1. Example 3.6 provides an
example in this wider class.
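
To illustrate the Markovian fixed point behind Theorems 5.7 and 5.8, the following Monte Carlo sketch may be helpful. It is illustrative only and rests on assumptions that are not part of the paper: X is taken to be a standard Brownian motion, the payoff processes are taken to be L_t = e^{−rt} l(X_t) and U_t = e^{−rt} u(X_t) for bounded l and u (not necessarily ordered), and the particular l, u, grid and parameters are ours. Under these assumptions the recursion Y_{T_k} = E[Ŷ_{T_{k+1}} | F_{T_k}] becomes, in Markovian form, the fixed point v(x) = E_x[e^{−rT} max{min{u(X_T), v(X_T)}, l(X_T)}] with T ∼ Exp(λ) independent of X, which can be iterated to convergence since E[e^{−rT}] = λ/(λ + r) < 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters and payoffs (not from the paper).
lam, r = 1.0, 0.1
l = lambda x: np.maximum(1.0 - x**2, 0.0)     # payoff when the sup player stops first
u = lambda x: 0.5 + np.exp(-np.abs(x))        # payoff when the inf player stops first

xgrid = np.linspace(-3.0, 3.0, 121)           # grid of initial values for v^C
n_mc = 20_000                                 # Monte Carlo samples per grid point

# Common random numbers: signal time T ~ Exp(lam) and a standard normal draw,
# so that X_T = x + sqrt(T) * Z for a Brownian motion started at x.
T = rng.exponential(1.0 / lam, size=n_mc)
Z = rng.standard_normal(n_mc)
disc = np.exp(-r * T)

def fixed_point_step(v_vals):
    """One application of v -> E_x[ e^{-rT} max(min(u(X_T), v(X_T)), l(X_T)) ]."""
    new_v = np.empty_like(v_vals)
    for i, x in enumerate(xgrid):
        XT = x + np.sqrt(T) * Z
        vXT = np.interp(XT, xgrid, v_vals)    # interpolate the current guess for v
        y_hat = np.maximum(np.minimum(u(XT), vXT), l(XT))
        new_v[i] = np.mean(disc * y_hat)
    return new_v

v = np.zeros_like(xgrid)
for _ in range(60):                           # iterate the contraction to (approximate) convergence
    v = fixed_point_step(v)

# Stopping regions read off from v^C as in Theorem 5.8 (within Monte Carlo error).
inf_region = v >= u(xgrid)                                               # minimiser stops
sup_region = (v < l(xgrid)) | ((v > l(xgrid)) & (l(xgrid) > u(xgrid)))   # maximiser stops
print("approximate value at x = 0:", float(np.interp(0.0, xgrid, v)))
```

Under the same illustrative assumptions, replacing the rate λ by 2λ and the term inside the expectation by the average (max{l(X_T), v(X_T)} + min{v(X_T), u(X_T)})/2 would give the fixed point associated with the driver of BSDE (5.12) for the independent constraint in Section 5.2 below.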
5.2. Zero-sum Dynkin games under the independent constraint. The
results for the game under the independent constraint are very similar. We summarize
the most relevant conclusions here.
Theorem 5.9. Suppose that X, l and u are such that Assumption 5.1 holds
for each x ∈ X and suppose that M∞ = 0. Then Assumptions 3.14 and 3.15 are
satisfied.
Proof. The proof follows the proofs of the lemmas and propositions of Section 5.1,
leading ultimately to the analogue of Theorem 5.8, namely this theorem. There are
two main changes: firstly, rather than consider the BSDE (5.1) we consider the
infinite horizon BSDE

(5.12)    dy_t = −[λ max{l(X_t), y_t} + λ min{y_t, u(X_t)} − (2λ + r) y_t] dt + Z_t dW_t,   t ≥ 0.

The associated driver is \tilde{f}(t, y) = λ max(l(X_t), y) + λ min(y, u(X_t)) − (2λ + r)y. Since
this driver is still Lipschitz, the proofs of the corresponding results pass through unchanged
and the BSDE has a unique solution in the space S^2_{α,∞} × H^2_{α,∞}.
Secondly, in the verification of the solution to the auxiliary games (which corresponds
to Lemma 5.6 under the common constraint), we need to merge the two
Poisson sequences, and this defines an increasing sequence (T_k^{mer})_{k≥1} (see [25, Section 3]
for details). As a result we define Ŷ_{T_k^{mer}} = max{L_{T_k^{mer}}, Y_{T_k^{mer}}} 1_{T_k^{mer} ∈ T^{MAX}} +
min{U_{T_k^{mer}}, Y_{T_k^{mer}}} 1_{T_k^{mer} ∈ T^{min}}, where T_k^{mer} ∈ T^{MAX} denotes the event that the signal
T_k^{mer} is the sup player's opportunity. Mutatis mutandis, the remainder of the
argument is the same.
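
As a rough illustration of this merging step, the sketch below (our own construction, assuming two independent rate-λ Poisson signal processes; all names and parameters are illustrative) builds the merged sequence and records whose opportunity each merged time is, mirroring the indicators 1_{T_k^{mer} ∈ T^{MAX}} and 1_{T_k^{mer} ∈ T^{min}}.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, horizon = 1.0, 10.0   # illustrative signal rate and time horizon

def poisson_times(rate, horizon, rng):
    """Event times of a Poisson process with the given rate on [0, horizon]."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > horizon:
            return np.array(times)
        times.append(t)

sup_times = poisson_times(lam, horizon, rng)   # the sup player's opportunities (T^MAX)
inf_times = poisson_times(lam, horizon, rng)   # the inf player's opportunities (T^min)

# Merge the two sequences into (T_k^mer) and label whose opportunity each time is.
merged = np.concatenate([sup_times, inf_times])
labels = np.array(["MAX"] * len(sup_times) + ["min"] * len(inf_times))
order = np.argsort(merged)
merged, labels = merged[order], labels[order]

# At a time labelled MAX one would set Y_hat = max(L, Y); at a time labelled min,
# Y_hat = min(U, Y), as in the displayed definition above.
for t, who in zip(merged[:5], labels[:5]):
    print(f"T^mer = {t:.3f}: opportunity of the {who} player")
```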

REFERENCES

[1] Alvarez, Luis, Lempa, Jukka, Saarinen, Harto, and Sillanpaa, Wiljami. Solutions for Poisson
stopping problems of linear diffusions via extremal processes. Stochastic Processes and
their Applications, 2024. https://doi.org/10.1016/j.spa.2024.104351.
[2] Bismut, Jean-Michel. Sur un probleme de Dynkin. Zeitschrift für Wahrscheinlichkeitstheorie
und Verwandte Gebiete, 39(1):31–53, 1977.
[3] Borodin, Andrei N. and Salminen, Paavo. Handbook of Brownian motion—facts and formulae.
Probability and its Applications. Birkhäuser Verlag, Basel, Second edition, 2002.
[4] Cvitanic, Jaksa and Karatzas, Ioannis. Backward stochastic differential equations with reflec-
tion and Dynkin games. The Annals of Probability, 24(4):2024–2056, 1996.
[5] Darling, Richard WR and Pardoux, Etienne. Backwards SDE with random terminal time and
applications to semilinear elliptic PDE. The Annals of Probability, 25(3):1135–1159, 1997.
[6] De Angelis, Tiziano and Ekström, Erik. Playing with ghosts in a Dynkin game. Stochastic
Processes and their Applications, 130:6133–6156, 2020.
[7] De Angelis, Tiziano, Ekström, Erik, and Glover, Kristoffer. Dynkin games with incomplete and
asymmetric information. Mathematics of Operations Research, 47(1):560–586, 2022.
[8] De Angelis, Tiziano, Merkulov, Nikita, and Palczewski, Jan. On the value of non-Markovian
Dynkin games with partial and asymmetric information. The Annals of Applied Probability,
32(3):1774–1813, 2022.
[9] Dupuis, Paul and Wang, Hui. Optimal stopping with random intervention times. Advances in
Applied Probability, 34(1):141–157, 2002.
[10] Dynkin, E. B. A game-theoretic version of an optimal stopping problem. Dokl. Akad. Nauk
SSSR, 185:16–19, 1969.
[11] Ekström, Erik and Peskir, Goran. Optimal stopping games for Markov processes. SIAM Journal
on Control and Optimization, 47(2):684–702, 2008.
[12] Gapeev, Pavel V. Discounted nonzero-sum optimal stopping games under Poisson random
intervention times. Stochastics, 2024. https://doi.org/10.1080/17442508.2024.2360941.
[13] Gilbarg, David and Trudinger, Neil S. Elliptic partial differential equations of second order.
Classics in Mathematics. Springer-Verlag, Berlin, 2001.
[14] Guo, Ivan. On Dynkin games with unordered payoff processes. arXiv:2008.06882, 2020.
[15] Hobson, David. The shape of the value function under Poisson optimal stopping. Stochastic
Processes and their Applications, 133:229–246, 2021.
[16] Hobson, David and Zeng, Matthew. Constrained optimal stopping, liquidity and effort. Stochas-
tic Processes and their Applications, 150:819–843, 2022.
[17] Hobson, David, Liang, Gechun, and Wang, Edward. Callable convertible bonds under liquidity
constraints and hybrid priorities. SIAM Journal on Financial Mathematics, 2024. To
appear.
[18] Karatzas, Ioannis and Shreve, Steven. Brownian motion and stochastic calculus. Springer,
New York, 2014.
[19] Kifer, Yuri. Dynkin’s games and Israeli options. International Scholarly Research Notices,
2013.
[20] Laraki, Rida and Solan, Eilon. The value of zero-sum stopping games in continuous time. SIAM
Journal on Control and Optimization, 43(5):1913–1922, 2005.
[21] Lempa, Jukka. Optimal stopping with information constraint. Applied Mathematics & Opti-
mization, 66(2):147–173, 2012.
[22] Lempa, Jukka and Saarinen, Harto. A zero-sum Poisson stopping game with asymmetric signal
rates. Applied Mathematics & Optimization, 87(3):35, 2023.
[23] Liang, Gechun. Stochastic control representations for penalized backward stochastic differential
equations. SIAM Journal on Control and Optimization, 53(3):1440–1463, 2015.
[24] Liang, Gechun and Sun, Haodong. Dynkin games with Poisson random intervention times.
SIAM Journal on Control and Optimization, 57(4):2962–2991, 2019.
[25] Liang, Gechun and Sun, Haodong. Risk-sensitive Dynkin games with heterogeneous Poisson
random intervention times. arXiv:2008.01787, 2020.
[26] Merkulov, Nikita. Value and Nash equilibrium in games of optimal stopping. PhD thesis,
University of Leeds, 2021.
[27] Neveu, Jacques. Discrete-parameter martingales. North-Holland Mathematical Library, vol.
10, Revised edition. North-Holland Publishing Co., Amsterdam-Oxford; American Elsevier
Publishing Co., Inc., New York, 1975.
[28] Pérez, José Luis, Rodosthenous, Neofytos, and Yamazaki, Kazutoshi. Non-zero-sum optimal
stopping game with continuous versus periodic observations. Mathematics of Operations
Research, 2024. https://doi.org/10.1287/moor.2023.0123.
[29] Rosenberg, Dinah, Solan, Eilon, and Vieille, Nicolas. Stopping games with randomized strate-
gies. Probability Theory and Related Fields, 119(3):433–451, 2001.
[30] Stettner, Lukasz. Zero-sum Markov games with stopping and impulsive strategies. Applied
Mathematics & Optimization, 9(1):1–24, 1982.
[31] Touzi, Nizar and Vieille, Nicolas. Continuous-time Dynkin games with mixed strategies. SIAM
Journal on Control and Optimization, 41(4):1073–1088, 2002.

Appendix A. Proofs.
[Proof of Lemma 3.7]
We prove that (τ ∧ η_E^{MAX}, X_{τ ∧ η_E^{MAX}}) and (τ ∧ η_E^{min}, X_{τ ∧ η_E^{min}}) have the same
distribution using a coupling of marked Poisson processes. The proof of the second
result follows in an identical fashion.
We expand the probability space (Ω, F, F, P) if necessary so that it supports a
marked point process N taking values in R+ × Y with rate dt × µ(dy), and such that
X and N are independent. Here µ is a measure on Y .
If Y is a singleton Y = {y} then µ is a point mass µ = µY δy where µY =
µ({y}) ∈ (0, ∞) and we may identify events of the marked point process with events
of a standard Poisson process with rate µY .
We are interested in the case where Y is a two-point set, Y = {R, B}, where
we will refer to a mark with label R (respectively B) as a red (respectively blue)
mark. Suppose µ is such that µ({R}) = µ_R and µ({B}) = µ_B. By the reasoning of
the previous paragraph, if we consider all the marks together then we can identify the
event times of the marked point process with those of a standard Poisson process with
rate µ_R + µ_B and event times (T_k^{µ_R+µ_B})_{k≥1}. If we only consider the Poisson process
generated by the red (respectively blue) marks then we have a Poisson process of rate
µ_R (respectively µ_B) with event times (T_k^{µ_R})_{k≥1} (respectively (T_k^{µ_B})_{k≥1}).
Using this coupling it is clear that for each ω ∈ Ω we have {T_k^{µ_R}(ω)}_{k≥1} ⊆
{T_k^{µ_R+µ_B}(ω)}_{k≥1} and {T_k^{µ_R}(ω)}_{k≥1} ∪ {T_k^{µ_B}(ω)}_{k≥1} = {T_k^{µ_R+µ_B}(ω)}_{k≥1}. Moreover,
the set on the left of the last equation is a disjoint union (almost surely).
Take µR = µB = λ. We use the red marks to denote the Poisson signals of the sup
player under the independent constraint, so that her stopping time τ can be defined
using red marks. Note that, given that Px (τ = ∞ or Xτ ∈ D) = 1, only those red
marks at which X is in D are used for the modelling of τ .
Consider the following two approaches:
(1) Thin out all the blue marks from the marked Poisson process, then find the
first event time of the remaining marked process such that X is in the set E; call this
time η^{(1)}.
(2) Thin out all the blue marks at which the process X is in the set D, and all
the red marks at which X is not in D, then find the first event time of the remaining
Poisson process such that X is in the set E; call this time η^{(2)}.
In either approach above we never thinned any red marks at which X is in D,
and so the definition of τ is not affected. It is immediate that the event time τ ∧ η^{(1)}
defined via the first approach has the same distribution as τ ∧ η_E^{MAX}, and the event
time τ ∧ η^{(2)} defined via the second approach has the same distribution as τ ∧ η_E^{min}.
But the two approaches are equivalent in the sense that in each approach half of the
marks are thinned from a Poisson process with intensity 2λ, and the first arrival time
of E is defined using the remaining process up to time τ.
Hence (τ ∧ η_E^{MAX}, X_{τ ∧ η_E^{MAX}}) and (τ ∧ η_E^{min}, X_{τ ∧ η_E^{min}}) have the same distribution.
For the expected payoff, observe that we can write R(τ, η_E^{min}) as

R(τ, η_E^{min}) = e^{−r(τ ∧ η_E^{min})} \big( l(X_{τ ∧ η_E^{min}}) 1_{X_{τ ∧ η_E^{min}} ∈ D} + u(X_{τ ∧ η_E^{min}}) 1_{X_{τ ∧ η_E^{min}} ∈ E} \big) 1_{τ ∧ η_E^{min} < ∞} + M_∞ 1_{τ ∧ η_E^{min} = ∞},

and we can write R(τ, η_E^{MAX}) in the exact same form on replacing τ ∧ η_E^{min} by τ ∧
η_E^{MAX}. Therefore J^x(τ, η_E^{min}) = J^x(τ, η_E^{MAX}) holds given that (τ ∧ η_E^{MAX}, X_{τ ∧ η_E^{MAX}})
and (τ ∧ η_E^{min}, X_{τ ∧ η_E^{min}}) have the same distribution, and that M_∞ is independent of
the Poisson signals. □
[Proof of Lemma 3.10]
We prove the first equality. The second equality can be proved via the same
approach. Recall that A = A^C and B = B^C. Let θ_t be the shift operator on the
canonical space and let θ_τ(ω) = θ_{τ(ω)}(ω).
By Corollary 3.8 we have v^C(X_τ^x) = J^{X_τ^x}(η_A^C, η_B^C) = J^{X_τ^x}(η_A^{MAX}, η_B^{min}). Hence,
by Corollary 3.9 and the strong Markov property:

e^{−rτ} v^C(X_τ^x) 1_{\{τ < η_B^{min}\} ∩ \{τ < γ_{τ,A}^{MAX}\}}
    = e^{−rτ} E^{X_τ^x}\big[ R(η_A^{MAX}, η_B^{min}) 1_{η_A^{MAX} ∧ η_B^{min} < ∞} + M_∞ 1_{η_A^{MAX} ∧ η_B^{min} = ∞} \big] 1_{\{τ < η_B^{min}\} ∩ \{τ < γ_{τ,A}^{MAX}\}}
    = \big( e^{−rτ} E^x\big[ (R(η_A^{MAX}, η_B^{min}) 1_{η_A^{MAX} ∧ η_B^{min} < ∞}) ◦ θ_τ \,\big|\, F_τ \big] + E^x\big[ M_∞ 1_{η_A^{MAX} ∧ η_B^{min} = ∞} \,\big|\, F_τ \big] \big) 1_{\{τ < η_B^{min}\} ∩ \{τ < γ_{τ,A}^{MAX}\}}
    = \big( e^{−rτ} E^x\big[ e^{−r(γ_{τ,A}^{MAX} ∧ η_B^{min} − τ)} \big( l(X_{γ_{τ,A}^{MAX}}) 1_{γ_{τ,A}^{MAX} < η_B^{min}} + u(X_{η_B^{min}}) 1_{η_B^{min} < γ_{τ,A}^{MAX}} \big) 1_{η_A^{MAX} ∧ η_B^{min} < ∞} \,\big|\, F_τ \big] + E^x\big[ M_∞ 1_{η_A^{MAX} ∧ η_B^{min} = ∞} \,\big|\, F_τ \big] \big) 1_{\{τ < η_B^{min}\} ∩ \{τ < γ_{τ,A}^{MAX}\}}
    = E^x\big[ R(γ_{τ,A}^{MAX}, η_B^{min}) \,\big|\, F_τ \big] 1_{\{τ < η_B^{min}\} ∩ \{τ < γ_{τ,A}^{MAX}\}}. □


[Proof of Lemma 4.4]
Recall that the functions l and u are both bounded. Hence, for any initial value
x, l(X), u(X) satisfy Assumption 5.1 for any fixed α ∈ (−θ^2/2, 0), with L_∞ = U_∞ = 0.
By Remark 4.3 and the property that V is a strong solution of (4.3), (V(X), V'(X))
satisfies BSDE (5.1). By Lemma 4.2 the candidate value function V is bounded, hence
the asymptotic condition (5.2) holds. Therefore (V(X), V'(X)) is a solution to the
BSDE (5.1). By Theorem 5.8, V is the value of the game under the common Poisson
constraint and (η^C_{[−1,1]}, η^C_{[−x^*,−1)∪(1,x^*]}) is a saddle point. □

[Proof of Proposition 5.3]


We start by considering the following finite horizon BSDE on [0, k] for any k ≥ 0:
(A.1)    y_t(k) = \int_t^k \big( λ \max\{l(X_s), \min(y_s(k), u(X_s))\} − (λ + r) y_s(k) \big) ds − \int_t^k Z_s(k) dW_s.

The driver of BSDE (A.1) is Lipschitz continuous and of linear growth in y, independent
of z. Hence, by Darling and Pardoux [5, Proposition 2.3], there exists a unique
pair of solutions (y(k), Z(k)) ∈ S^2_{0,k} × H^2_{0,k} to the BSDE (A.1).

Next, we extend BSDE (A.1) from [0, k] to [0, ∞) by defining y_t(k) = Z_t(k) = 0
for t ≥ k. Hence we get two sequences (y(k))_{k≥0} ⊂ S^2_{α,∞} and (Z(k))_{k≥0} ⊂ H^2_{α,∞}. Our
next step is to show that these sequences are Cauchy sequences in the corresponding
weighted normed spaces.

Take k ′ ≥ k ≥ 0. It follows from Itô’s formula that we have:


e^{2αt}(y_t(k') − y_t(k))^2
    = −\int_t^{k'} 2α e^{2αs} (y_s(k') − y_s(k))^2 ds
    + \int_t^{k'} 2 e^{2αs} (y_s(k') − y_s(k)) λ \big( \max\{l(X_s), \min(y_s(k'), u(X_s))\} − \max\{l(X_s), \min(y_s(k), u(X_s))\} \big) ds
    − \int_t^{k'} 2 e^{2αs} (λ + r)(y_s(k') − y_s(k))^2 ds + \int_k^{k'} 2 e^{2αs} λ l(X_s)(y_s(k') − y_s(k)) ds
    − \int_t^{k'} 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s − \int_t^{k'} e^{2αs} (Z_s(k') − Z_s(k))^2 ds.

By an application of the inequality 2ab ≤ \frac{a^2}{δ^2} + δ^2 b^2, for any δ > 0 we have:

(A.2)    \int_k^{k'} 2 e^{2αs} λ l(X_s)(y_s(k') − y_s(k)) ds ≤ \int_k^{k'} e^{2αs} λ \Big( δ^2 l(X_s)^2 + \frac{1}{δ^2} (y_s(k') − y_s(k))^2 \Big) ds.
Since α > −r we may choose δ > 0 such that α = \frac{λ}{2δ^2} − r. Using (A.2) and Lipschitz
continuity of the driver, we obtain:
e^{2αt}(y_t(k') − y_t(k))^2
    ≤ \int_t^{k'} 2 e^{2αs} (y_s(k) − y_s(k'))^2 \Big( −α + λ − (λ + r) + \frac{λ}{2δ^2} \Big) ds + \int_k^{k'} e^{2αs} λ δ^2 l(X_s)^2 ds
    − \int_t^{k'} 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s − \int_t^{k'} e^{2αs} (Z_s(k') − Z_s(k))^2 ds,

and hence,
e^{2αt}(y_t(k') − y_t(k))^2 + \int_t^{k'} e^{2αs} (Z_s(k') − Z_s(k))^2 ds
    ≤ \int_k^{k'} e^{2αs} λ δ^2 l(X_s)^2 ds − \int_t^{k'} 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s
    = \int_k^{k'} e^{2αs} λ δ^2 l(X_s)^2 ds + \int_0^{t} 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s
(A.3)        − \int_0^{k'} 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s.
It can be shown that (\int_0^t 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s)_{t≥0} is a uniformly
integrable martingale. Indeed, by the BDG inequality and the inequality 2ab ≤
\frac{a^2}{δ_1^2} + δ_1^2 b^2, for any δ_1 > 0,

E\Big[\sup_{t≥0} \Big| \int_0^t 2 e^{2αs} (y_s(k') − y_s(k))(Z_s(k') − Z_s(k)) dW_s \Big|\Big]
    ≤ \frac{C}{δ_1^2} E\Big[\sup_{0≤s≤k'} e^{2αs} (y_s(k') − y_s(k))^2\Big] + C δ_1^2 E\Big[\int_0^{k'} e^{2αs} (Z_s(k') − Z_s(k))^2 ds\Big]
    ≤ \frac{C}{δ_1^2} \|y(k') − y(k)\|_{S^2_{α,k'}} + C δ_1^2 \|Z(k') − Z(k)\|_{H^2_{α,k'}} < ∞.

Taking expectations at t = 0 in (A.3), we get:

\|Z(k') − Z(k)\|_{H^2_{α,∞}} = E\Big[\int_0^{k'} e^{2αs} (Z_s(k') − Z_s(k))^2 ds\Big] ≤ λ δ^2 E\Big[\int_k^{k'} e^{2αs} l(X_s)^2 ds\Big] ↓ 0

as k, k' → ∞. Hence (Z(k))_{k≥0} is a Cauchy sequence in the space H^2_{α,∞} and converges
to some limit which we denote by Z = (Z_t)_{t≥0}.
Similarly, taking expectations of the supremum over t in (A.3), we get:

E\big[\sup_{t≥0} e^{2αt} (y_t(k') − y_t(k))^2\big]
    ≤ E\Big[\int_k^{k'} e^{2αs} λ δ^2 l(X_s)^2 ds\Big] + \frac{C}{δ_1^2} \|y(k') − y(k)\|_{S^2_{α,k'}} + C δ_1^2 \|Z(k') − Z(k)\|_{H^2_{α,k'}}.

Recall that y_t(k) = Z_t(k) = 0 for t ≥ k, which implies that \|y(k') − y(k)\|_{S^2_{α,k'}} =
\|y(k') − y(k)\|_{S^2_{α,∞}} and \|Z(k') − Z(k)\|_{H^2_{α,k'}} = \|Z(k') − Z(k)\|_{H^2_{α,∞}}. Hence, by choosing
δ_1 such that δ_1^2 > C, we have:

\Big(1 − \frac{C}{δ_1^2}\Big) \|y(k') − y(k)\|_{S^2_{α,∞}} ≤ E\Big[\int_k^{k'} e^{2αs} λ δ^2 l(X_s)^2 ds\Big] + C δ_1^2 \|Z(k') − Z(k)\|_{H^2_{α,∞}}.

Hence, taking k, k' → ∞ and applying the result that (Z(k))_{k≥0} is a Cauchy sequence
in the space H^2_{α,∞}, we have that (y(k))_{k≥0} is a Cauchy sequence in the space S^2_{α,∞}
and converges to some limit which we denote by y = (y_t)_{t≥0}.
By considering limits of y(k) and Z(k) as k → ∞, it is standard to check that
(y, Z) satisfies BSDE (5.1). The process y is defined as a limit of a Cauchy sequence
in the space S^2_{α,∞}; therefore, it is immediate that \lim_{t↑∞} E[e^{2αt} y_t^2] = 0. The existence of
a solution to BSDE (5.1) thus follows.
For the uniqueness of the solution, let (y, Z) and (y', Z') be two solutions to
BSDE (5.1), and define ∆y = y − y' and ∆Z = Z − Z'. Then, for 0 ≤ t ≤ T, ∆y
solves:

∆y_t = ∆y_T + \int_t^T \big( f(s, y_s) − f(s, y_s') \big) ds − \int_t^T ∆Z_s dW_s.

Applying Itô's formula to e^{2αt}(∆y_t)^2, a similar argument to that in the proof of existence
implies that \|∆y\|_{S^2_{α,∞}} = \|∆Z\|_{H^2_{α,∞}} = 0, hence uniqueness follows. □
[Proof of Lemma 5.5]
The integrability of Y follows from the result that y ∈ S^2_{α,∞}. Assumption 5.1
states integrability of L and U, therefore the integrability of Ŷ follows.
The processes L = (L_t)_{t≥0} and U = (U_t)_{t≥0} are in H^2_{α+r,∞} under Assumption
5.1. This implies that \int_0^∞ e^{2(α+r)t} L_t^2 dt and \int_0^∞ e^{2(α+r)t} U_t^2 dt are a.s. finite. We know
that, if a nonnegative function f satisfies \int_0^∞ f(t) dt < ∞ and \lim_{t↑∞} f(t) exists, then
this limit must be zero. Hence, by Assumption 5.1, L_t, U_t → 0 a.s.
The process y is in the space S^2_{α,∞} with α > −r, therefore it follows that Y =
(Y_t)_{t≥0} ∈ S^2_{α+r,∞}, which implies that Y_t → 0 a.s. Hence Ŷ_t = max{min{U_t, Y_t}, L_t} →
0 a.s. □
