Stochastic Optimization in Population Dynamics

S. Ly (University Cheikh Anta Diop of Dakar, FASEG LMDAN, Dakar, Senegal)
D. Seck (University Cheikh Anta Diop of Dakar, FASEG LMDAN, Dakar, Senegal; IRD, UMMISCO, Dakar, Senegal)
Abstract In this paper we build and study stochastic models governing fisheries activities. The evolution of the population is described by stochastic differential equations derived from the proposed model. Using tools of dynamic programming, we also derive Hamilton-Jacobi-Bellman equations, which we study from the theoretical and the numerical point of view.
1 Introduction
2 Modeling
In this section, we want to model the fishery by considering two sites, one for the fish and the other for the fishermen (boats); see Fig. 1. Let x(t) and E(t) represent respectively the size of the fish population and the fishing effort in the system at time t. It is assumed that in a small time interval Δt, Δx can take the values −1, 0 or 1 and ΔE can take the values −1, 0 or 1. Let ΔX = [Δx; ΔE]^T denote the change in a small time interval Δt. In this model, capture by other predators is not taken into account and we do not consider the growth of the fishermen's population. We denote by b the per capita birth rate, d the per capita death rate, c the cost per unit of fishing effort, q the catchability in the site and p the price.
Under these assumptions, as illustrated in Fig. 1, there are four possible changes for the two states in the time interval Δt, not counting the case where there is no change and neglecting multiple births, deaths or transformations in time Δt, which have probabilities of order (Δt)². The possible changes and their probabilities are given in Fig. 2. We are now interested in finding the mean change E(ΔX) and the covariance matrix E(ΔX(ΔX)^T) for the time interval Δt.
$$
\begin{array}{ll}
\text{Change} & \text{Probability}\\[2pt]
\Delta X^1 = [1;\ 0]^T & p_1 = bx\,\Delta t\\
\Delta X^2 = [-1;\ 0]^T & p_2 = dx\,\Delta t + qxE\,\Delta t\\
\Delta X^3 = [0;\ 1]^T & p_3 = pqxE\,\Delta t\\
\Delta X^4 = [0;\ -1]^T & p_4 = cE\,\Delta t\\
\Delta X^5 = [0;\ 0]^T & p_5 = 1 - \sum_{i=1}^{4} p_i
\end{array}
$$
Fig. 2 Possible changes in the population of the fishes with corresponding probabilities
Neglecting terms of order (Δt)², we obtain

$$ \mathbb{E}(\Delta X) = \sum_{i=1}^{5} p_i\,\Delta X^i = \begin{bmatrix} bx - dx - qxE\\ pqxE - cE \end{bmatrix}\Delta t $$

and

$$ \mathbb{E}\big(\Delta X(\Delta X)^T\big) = \sum_{i=1}^{5} p_i\,(\Delta X^i)(\Delta X^i)^T = \begin{bmatrix} bx + dx + qxE & 0\\ 0 & pqxE + cE \end{bmatrix}\Delta t. $$
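As a quick sanity check of these two formulas, the sums over the transitions of Fig. 2 can be carried out symbolically. The sketch below is our own illustration (the variable names are ours and sympy is just one possible tool); it is not part of the original derivation.

```python
import sympy as sp

# symbols of the single-site model: population x, effort E, rates b, d, q, price p, cost c
x, E, b, d, q, p, c, dt = sp.symbols('x E b d q p c dt', positive=True)

# possible changes DeltaX^i and their probabilities p_i, as listed in Fig. 2
changes = [sp.Matrix([1, 0]), sp.Matrix([-1, 0]), sp.Matrix([0, 1]), sp.Matrix([0, -1])]
probs   = [b*x*dt, d*x*dt + q*x*E*dt, p*q*x*E*dt, c*E*dt]
# the fifth change [0, 0]^T contributes nothing to the sums below

mean = sum((pi*ci for pi, ci in zip(probs, changes)), sp.zeros(2, 1))
cov  = sum((pi*ci*ci.T for pi, ci in zip(probs, changes)), sp.zeros(2, 2))

print(sp.simplify(mean))  # [(b - d)*x*dt - q*x*E*dt, (p*q*x - c)*E*dt]
print(sp.simplify(cov))   # diag((b + d)*x*dt + q*x*E*dt, (p*q*x + c)*E*dt)
```

Dividing these two results by Δt gives the vector f and the matrix G used below to build the system (2.1).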
We now define the expectation vector f = E(ΔX)/Δt and the 2 × 2 symmetric positive definite covariance matrix G. Noticing that Δt is small and E(ΔX)E(ΔX)^T = O((Δt)²), the covariance matrix is set equal to G = E(ΔX(ΔX)^T)/Δt. Referring to [1], this leads to the stochastic differential equation system:
$$
\begin{cases}
dx(t) = (bx - dx - qxE)\,dt + \sqrt{bx + dx + qxE}\;dW_1(t)\\[4pt]
dE(t) = (pqxE - cE)\,dt + \sqrt{pqxE + cE}\;dW_2(t)\\[4pt]
x(0) = x_0\\
E(0) = E_0
\end{cases}
\tag{2.1}
$$
The stochastic differential equation model (2.1) for the dynamics of these interacting populations can be rewritten in the compact form

$$ dX(t) = F(t, X)\,dt + G(t, X)^{1/2}\,dW(t). $$
We now place three sites on the sea. We suppose that fish move randomly from the sea to a given site i and vice versa. They can also move from a site i to a site j ≠ i and vice versa. We denote respectively by m_{si} and m_{ij} the movement rates from the sea to site i and from site i to site j.
We designate by x_i(t) the size of the population at time t in site i. We study the variation Δx of the population during a time interval Δt.
The time interval Δt is assumed to be sufficiently small that several births and deaths cannot occur at the same time in different sites.
In this model, natural death and capture by other predators are not taken into account; the only way for fish to die is fishing. We denote by b_i the per capita birth rate and q_i the catchability in site i (see Fig. 3).
Under these assumptions, there are 21 possibilities for a population change Δx if we neglect multiple births, deaths or transformations in time Δt, which have probabilities of order (Δt)². These possibilities are listed in Fig. 4 along with their corresponding probabilities.
The first component of the vector Δx^i represents the transformation of the fish population in the sea (off-site). The second, third and fourth components represent respectively the transformations made in sites 1, 2 and 3.
Fig. 4 Possible changes in the population of the fishes with the corresponding probabilities
and

$$ \mathbb{E}\big(\Delta x(\Delta x)^T\big) = \sum_{i=1}^{21} p_i\,(\Delta x^i)(\Delta x^i)^T = \begin{bmatrix} \delta_s & a_{s1} & a_{s2} & a_{s3}\\ a_{1s} & \delta_1 & a_{12} & a_{13}\\ a_{2s} & a_{21} & \delta_2 & a_{23}\\ a_{3s} & a_{31} & a_{32} & \delta_3 \end{bmatrix}\Delta t. $$
We now define the expectation vector f and the 4 × 4 symmetric positive definite covariance matrix g.
Fig. 5 Possible changes in the population of the fishes with corresponding probabilities
Then, neglecting terms of order (Δt)², the mean change E(ΔE) and the covariance matrix E(ΔE(ΔE)^T) for the time interval Δt are given by:

$$ \mathbb{E}(\Delta E) = \sum_{i=1}^{13} p_i\,\Delta E^i = \begin{bmatrix} \beta_{21}E_2 + \beta_{31}E_3 - (\beta_{12} + \beta_{13})E_1 + pq_1x_1E_1 - c_1E_1\\[2pt] \beta_{12}E_1 + \beta_{32}E_3 - (\beta_{21} + \beta_{23})E_2 + pq_2x_2E_2 - c_2E_2\\[2pt] \beta_{13}E_1 + \beta_{23}E_2 - (\beta_{31} + \beta_{32})E_3 + pq_3x_3E_3 - c_3E_3 \end{bmatrix}\Delta t $$
and

$$ \mathbb{E}\big(\Delta E(\Delta E)^T\big) = \sum_{i=1}^{13} p_i\,\Delta E^i(\Delta E^i)^T = \begin{bmatrix} \gamma_1 & b_{12} & b_{13}\\ b_{21} & \gamma_2 & b_{23}\\ b_{31} & b_{32} & \gamma_3 \end{bmatrix}\Delta t. $$
where $W_1(t) = \big[W_1^1(t),\ W_1^2(t),\ W_1^3(t)\big]^T$.
In this section, we want to model the fishery by considering the evolution of the resource (fish) and the movement of boats between different sites.
For this, we place L sites on the sea, where L is a positive integer (L ≥ 3). Here the sites are F.A.D.s (fish aggregating devices), objects that have the power to attract fish. We suppose that fish move randomly from the sea to a given site i and vice versa. They can also move from a site i to a site j ≠ i and vice versa. In this model, capture by other predators is not taken into account. We denote by b_i the per capita birth rate, d_i the per capita death rate and q_i the catchability in site i.
We denote respectively by m_{si} and m_{ij} the movement rates from the sea to site i and from site i to site j.
We designate by x_i(t) the size of the population at time t in site i and by E_i(t) the fishing effort.
To determine our model, we assume that we have only one boat, which can move from a site i to another site j with symmetric movement rates β_{ij}, that is to say β_{ij} = β_{ji} for all i ≠ j. Here we assume that we do not fish outside the sites and that on each site i there is a cost per unit of fishing effort c_i and a catchability q_i. In our model, we assume that we do not incur costs at several sites at the same time and that we do not capture at several sites at the same time (see Fig. 6).
Under these assumptions, there are (L + 1)(L + 2) + 1 possibilities (L ≥ 2) corresponding to the evolution of the fishery. With the same arguments as in the single-site case, the stochastic model is defined as follows:
$$
\begin{cases}
dX(t) = f\,dt + g^{1/2}\,dW\\[2pt]
dY(t) = f_1\,dt + g_1^{1/2}\,dW_1\\[2pt]
X(0) = X_0\\
Y(0) = Y_0
\end{cases}
\tag{2.5}
$$
and

$$ f_1 = \begin{bmatrix} \displaystyle\sum_{i=2}^{L}\beta_{i1}E_i - \sum_{i=2}^{L}\beta_{1i}E_1 + pq_1x_1E_1 - c_1E_1\\[8pt] \displaystyle\sum_{i\neq 2}\beta_{i2}E_i - \sum_{i\neq 2}\beta_{2i}E_2 + pq_2x_2E_2 - c_2E_2\\[4pt] \vdots\\[4pt] \displaystyle\sum_{i=1}^{L-1}\beta_{iL}E_i - \sum_{i=1}^{L-1}\beta_{Li}E_L + pq_Lx_LE_L - c_LE_L \end{bmatrix} = \begin{bmatrix} f_1^1\\ f_1^2\\ \vdots\\ f_1^L \end{bmatrix} $$
where

$$ \delta_s = b_sx_s + d_sx_s + \sum_{i=1}^{L} m_{is}x_i + \sum_{i=1}^{L} m_{si}x_s, $$

$$ \delta_i = b_ix_i + d_ix_i + q_ix_iE_i + m_{si}x_s + m_{is}x_i + \sum_{j\neq i} m_{ji}x_j + \sum_{j\neq i} m_{ij}x_i \quad \text{for } i = 1,\dots,L, $$

$$ a_{ij} = -(m_{ij}x_i + m_{ji}x_j) \quad \forall\, i \neq j $$
and

$$ g_1 = \begin{bmatrix} \gamma_1 & b_{12} & \cdots & b_{1L}\\ b_{21} & \gamma_2 & \cdots & b_{2L}\\ \vdots & \vdots & \ddots & \vdots\\ b_{L1} & b_{L2} & \cdots & \gamma_L \end{bmatrix} $$
where

$$ \gamma_i = \sum_{j\neq i}\beta_{ji}E_j + \sum_{j\neq i}\beta_{ij}E_i + pq_ix_iE_i + c_iE_i \quad \text{for } i = 1,\dots,L $$
and

$$ b_{ij} = -(\beta_{ij}E_i + \beta_{ji}E_j) \quad \forall\, i \neq j. $$
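To make the structure of f₁ and g₁ concrete, here is a small sketch that assembles them for given site data. It is our own illustration under assumed conventions (in particular, beta[i, j] stands for the movement rate from site i+1 to site j+1 in 0-based indexing); it is not code from the paper.

```python
import numpy as np

def effort_drift_cov(x, E, beta, q, c, p):
    """Assemble the drift vector f1 and covariance matrix g1 of the effort process.

    x, E : arrays of length L (biomass and fishing effort per site)
    beta : (L, L) array of movement rates between sites
    q, c : arrays of length L (catchability and cost per site), p : price
    """
    L = len(E)
    f1 = np.zeros(L)
    g1 = np.zeros((L, L))
    for j in range(L):
        inflow  = sum(beta[i, j] * E[i] for i in range(L) if i != j)   # sum_i beta_ij E_i
        outflow = sum(beta[j, i] * E[j] for i in range(L) if i != j)   # sum_i beta_ji E_j
        f1[j] = inflow - outflow + p * q[j] * x[j] * E[j] - c[j] * E[j]
        g1[j, j] = inflow + outflow + p * q[j] * x[j] * E[j] + c[j] * E[j]   # gamma_j
        for i in range(L):
            if i != j:
                g1[i, j] = -(beta[i, j] * E[i] + beta[j, i] * E[j])          # b_ij
    return f1, g1
```

With β symmetric (β_{ij} = β_{ji}), as assumed above, the matrix g₁ returned by this sketch is symmetric.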
3 Stochastic Optimization
In this section, we want to maximize the profits, defined as the difference between the total capture in the different sites and the costs related to the fishing activity. The functional governing these profits is defined as the average value of all benefits over a time interval [0; T], T > 0, and it is given by
$$ J[Z(t), E(\cdot)] = \mathbb{E}\left[\int_0^T\left(\sum_{i=1}^{L} pq_ix_iE_i - \sum_{i=1}^{L}\sum_{j\neq i} c_jE_i\right)dt\right], $$
where the vector Z(t) = Z_t = (X_t; E_t), (X(t); E(t)) = (X_t; E_t), and c_j = φ_{ij}d_{ij} represents the cost of leaving a site i to go to another site j; it is proportional to the distance d_{ij} between the two sites. In the development of this functional, we do not take into account the costs incurred by the boats to leave the beach and reach the first fishing site, nor the return costs of the boats to the beach after fishing.
Then the stochastic optimization problem is to maximize J [Z(t), E (.)] under the
constraints:
$$
\begin{cases}
dX_t = f\,dt + g^{1/2}\,dW\\[2pt]
dE_t = f_1\,dt + g_1^{1/2}\,dW_1\\[2pt]
X(0) = X_0\\
E(0) = E_0
\end{cases}
\tag{3.1}
$$
where $F = \begin{bmatrix} f\\ f_1 \end{bmatrix}$, $\zeta = \begin{bmatrix} W\\ W_1 \end{bmatrix}$ and $G$ is the matrix $G = \begin{bmatrix} g^{1/2} & 0\\ 0 & g_1^{1/2} \end{bmatrix}$.
To solve the problem defined by (3.2) and (3.3), let U(Z, t), known as the value function, be the expected value of the objective function (3.2) from t to T when an optimal policy is followed from t to T, given Z_t = z:
$$ U(z, t) = \max_{p(\cdot)\in[p_{\min};\,p_{\max}]} \mathbb{E}\left[\int_t^T\left(\sum_{i=1}^{L} pq_ix_iE_i - \sum_{i=1}^{L}\sum_{j\neq i} c_jE_i\right)ds\right]. \tag{3.4} $$
This functional gives the remaining optimal cost by assuming that we arrive at Zt in
time t < T . The final condition imposed by function U is:
U (z, T ) = 0 (3.5)
$$ U(z + dZ_t, t + dt) = U(z, t) + \frac{\partial U}{\partial X}\,dX_t + \frac{\partial U}{\partial E}\,dE_t + \frac{\partial U}{\partial t}\,dt + \frac{1}{2}\left[U_{XX}(dX_t)^2 + U_{EE}(dE_t)^2 + U_{tt}(dt)^2\right] + o\big((dt)^2\big) \tag{3.7} $$
We have:

$$ dZ_t = \begin{pmatrix} dX_t\\ dE_t \end{pmatrix} = \begin{pmatrix} f\,dt + g^{1/2}\,dW\\[2pt] f_1\,dt + g_1^{1/2}\,dW_1 \end{pmatrix} $$
and

$$ (dX_t)^2 = f^2(dt)^2 + 2f g^{1/2}(dW)\,dt + g\,(dW)^2, \qquad (dE_t)^2 = f_1^2(dt)^2 + 2f_1 g_1^{1/2}(dW_1)\,dt + g_1\,(dW_1)^2. $$
Computing the expectations and substituting the above expressions into (3.7), we have:
$$ \mathbb{E}\big(U(z + dZ_t, t + dt)\big) = U(z, t) + (U_z\cdot F)\,dt + \frac{\partial U}{\partial t}\,dt + \frac{1}{2}\mathrm{Tr}\,(U_{zz}\cdot G^2)\,dt. \tag{3.10} $$
Substituting (3.10) into (3.4), we obtain:

$$ U(z, t) = \max_{p(\cdot)\in[p_{\min};\,p_{\max}]}\left[\int_t^{t+dt}\left(\sum_{i=1}^{L} pq_ix_iE_i - \sum_{i=1}^{L}\sum_{j\neq i} c_jE_i\right)ds + U(z, t) + (U_z\cdot F)\,dt + \frac{\partial U}{\partial t}\,dt + \frac{1}{2}\mathrm{Tr}\,(U_{zz}\cdot G^2)\,dt\right]. $$

We assume that dt is sufficiently small that we have:

$$ U(z, t) = \max_{p(\cdot)\in[p_{\min};\,p_{\max}]}\left[\left(\sum_{i=1}^{L} pq_ix_iE_i - \sum_{i=1}^{L}\sum_{j\neq i} c_jE_i\right)dt + U(z, t) + (U_z\cdot F)\,dt + \frac{\partial U}{\partial t}\,dt + \frac{1}{2}\mathrm{Tr}\,(U_{zz}\cdot G^2)\,dt + o(dt)\right]. \tag{3.11} $$
Note that we have suppressed the arguments of the functions involved in (3.11).
Cancelling the term U on both sides of (3.11), dividing the remainder by dt, and letting dt → 0, we obtain the Hamilton-Jacobi-Bellman (HJB) equation:
$$ \frac{\partial U}{\partial t} + \max_{p(\cdot)\in[p_{\min};\,p_{\max}]}\left[\sum_{i=1}^{L} pq_ix_iE_i - \sum_{i=1}^{L}\sum_{j\neq i} c_jE_i + U_z\cdot F + \frac{1}{2}\mathrm{Tr}\,(U_{zz}\cdot G^2)\right] = 0 \tag{3.12} $$

$$ U(z, T) = 0 \tag{3.13} $$
That is, U solves the system

$$
\begin{cases}
\dfrac{\partial U}{\partial t} + \max\limits_{p(\cdot)\in[p_{\min};\,p_{\max}]}\left[\sum\limits_{i=1}^{L} pq_ix_iE_i - \sum\limits_{i=1}^{L}\sum\limits_{j\neq i} c_jE_i + U_z\cdot F + \dfrac{1}{2}\mathrm{Tr}\,(U_{zz}\cdot G^2)\right] = 0,\\[10pt]
U(z, T) = 0.
\end{cases}
\tag{3.14}
$$
where

$$ U_z\cdot F = \sum_{i=0}^{L}\frac{\partial U}{\partial x_i}\,f^i + \sum_{i=1}^{L}\frac{\partial U}{\partial E_i}\,f_1^i $$

and

$$ \mathrm{Tr}\,(U_{zz}\cdot G^2) = \sum_{i,j=0}^{L} a_{ij}\,\frac{\partial^2 U}{\partial x_i\partial x_j} + \sum_{i,j=1}^{L} b_{ij}\,\frac{\partial^2 U}{\partial E_i\partial E_j}. $$
where

$$ \langle u\rangle^{(0)} = \max|u|, \qquad \langle u\rangle^{(j)} = \sum_{2r+s=j}\big|D_t^r D_x^s(u)\big|, $$

$$ \langle u\rangle_x^{(l)} = \sum_{2r+s=[l]}\big\langle D_t^r D_x^s(u)\big\rangle_x^{(l-[l])}, $$

$$ \langle u\rangle_t^{(l/2)} = \sum_{0<l-2r-s<2}\big\langle D_t^r D_x^s(u)\big\rangle_t^{\left(\frac{l-2r-s}{2}\right)}. $$
Theorem 3.1 Let Ω = [a; +∞)^{2L+1} × [0; T] with a > 0 and 0 < l. If all the coefficients belong to the class C^{l, l/2}(Ω), then problem (3.17) has a unique solution in the class C^{2+l, 1+l/2}(Ω). It satisfies the inequality:

$$ |V|^{(l+2)} < k\,\left|\sum_{i=1}^{L}\sum_{j\neq i} c_jE_i - \sum_{i=1}^{L} pq_ix_iE_i\right|^{(l)}. \tag{3.18} $$
The stochastic differential equation model (4.1) for the dynamics of these interacting populations can be rewritten in the form

$$ dX(t) = F(t, X)\,dt + G(t, X)^{1/2}\,dW(t), $$

with X(0) = X_0,

$$ G(t, X) = \begin{bmatrix} bx + dx + qxE & 0\\ 0 & pqxE + cE \end{bmatrix}, $$

and W(t) = [W_1(t); W_2(t)]^T a two-dimensional Brownian motion.
We now show that this system (4.1) has a unique solution. For this, let Γ = [x_min; x_max] × [E_min; E_max] and A = [p_min; p_max], where x_min, x_max, E_min, E_max, p_min and p_max are positive real numbers such that x_min < x_max, E_min < E_max and p_min < p_max.
Theorem 4.1 Assume that X_0 is independent of the future of the Brownian motion beyond time t = 0 and that, for any t ∈ [0; +∞) and p ∈ A, X ∈ Γ is a progressively measurable process such that, for any T > 0,

$$ \mathbb{E}\left[\int_t^T |X_s|^2\,ds\right] < +\infty. $$

Then the system (4.1) admits a unique solution.
Proof We have, for X = (x; E) and Y = (x_1; E_1),

$$ |F(t, X, p) - F(t, Y, p)| \le K_1 |X - Y|, $$

where $K_1 = \max\big(|b - d| + qE_{\max} + qx_{\max};\ pqE_{\max} + cx_{\max}\big)$, and

$$ |G(t, X, p) - G(t, Y, p)| = \max\Big(\big|\sqrt{bx + dx + qxE} - \sqrt{bx_1 + dx_1 + qx_1E_1}\big|;\ \big|\sqrt{pqxE + cE} - \sqrt{pqx_1E_1 + cE_1}\big|\Big) \le K_2 |X - Y|, $$
where

$$ K_2 = \max\left(\frac{b + d + q(x_{\max} + E_{\max})}{2\sqrt{(b + d + qE_{\max})\,x_{\max}}};\ \frac{pq(x_{\max} + E_{\max}) + c}{2\sqrt{(pqx_{\max} + c)\,E_{\max}}}\right). $$

Choosing K = max(K_1; K_2) ends the proof.
In Fig. 7, we plot the solution of system (4.1). The blue and red curves respectively represent the biomass and the fishing effort in the deterministic and stochastic cases, with initial conditions x(0) = 10 and E(0) = 5.
We notice that when the fishing effort increases to its maximal value, the biomass decreases considerably, which leads to the extinction of the species. Then, as the species becomes rare, the fishing effort also decreases and goes to zero.
Fig. 7 Representation of the solution of system (4.1). We plot the solution for T = 100, b = 0.05,
d = 0.02, q = 0.01, p = 1, and c = 0.02
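For readers who wish to reproduce a trajectory of the type shown in Fig. 7, a minimal Euler–Maruyama sketch of system (4.1) with the quoted parameters could look as follows. It is our own illustration, not the authors' code; in particular, clipping the state at zero is our simple way of keeping the square roots defined.

```python
import numpy as np

# parameters quoted in the caption of Fig. 7
b, d, q, p, c = 0.05, 0.02, 0.01, 1.0, 0.02
T, n_steps = 100.0, 10_000
dt = T / n_steps
rng = np.random.default_rng(0)

x, E = 10.0, 5.0                       # initial conditions x(0) = 10, E(0) = 5
xs, Es = [x], [E]
for _ in range(n_steps):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    x_new = x + (b*x - d*x - q*x*E)*dt + np.sqrt(b*x + d*x + q*x*E)*dW1
    E_new = E + (p*q*x*E - c*E)*dt + np.sqrt(p*q*x*E + c*E)*dW2
    x, E = max(x_new, 0.0), max(E_new, 0.0)   # keep the state non-negative
    xs.append(x); Es.append(E)
```

Plotting xs and Es against time gives stochastic trajectories of biomass and effort; dropping the dW₁ and dW₂ terms gives the deterministic curves.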
In this section, we want to maximize the profits, defined as the difference between the total capture in the different sites and the costs related to the fishing activity. The functional governing these profits is defined as the average value of all benefits over a time interval [0; T], T > 0, and it is given by
$$ J[x_t, E(\cdot)] = \mathbb{E}\left[\int_0^T (pqxE - cE)\,dt\right]. \tag{4.3} $$
where positive constants p and c are respectively price and cost per unit of fishing
effort. Then the stochastic optimization problem is to maximize J [xt , E (.)] under
the constraints
$$
\begin{cases}
dx(t) = f(t, x)\,dt + \sqrt{g(t, x)}\,dW\\[2pt]
x(0) = x_0
\end{cases}
\tag{4.4}
$$

where f(t, x) = bx(t) − dx(t) − qx(t)E(t), g(t, x) = bx(t) + dx(t) + qx(t)E(t) and W is a Brownian motion.
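As an illustration of the functional (4.3), one can estimate J for a constant effort policy by simulating (4.4) with the Euler–Maruyama method and averaging the integrated profit over sample paths. The sketch below is our own (the function name and the numerical choices are ours, with parameter defaults borrowed from Fig. 7); it is not part of the authors' numerical experiments.

```python
import numpy as np

def profit_J(E, x0=10.0, b=0.05, d=0.02, q=0.01, p=1.0, c=0.02,
             T=100.0, n_steps=2_000, n_paths=500, seed=0):
    """Monte Carlo estimate of the expected integrated profit J in (4.3)
    for a constant effort E, simulating (4.4) by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    profit = np.zeros(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        profit += (p*q*x*E - c*E) * dt                         # accumulate p q x E - c E
        drift = b*x - d*x - q*x*E
        diffusion = np.sqrt(np.maximum(b*x + d*x + q*x*E, 0.0))
        x = np.maximum(x + drift*dt + diffusion*dW, 0.0)       # keep the biomass non-negative
    return profit.mean()
```

Comparing, for instance, profit_J(0.2) with profit_J(0.4) gives a crude way of ranking constant-effort policies before solving the HJB equation.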
To solve the problem defined by (4.3) and (4.4), let U(x, t), known as the value function, be the expected value of the objective function (4.3) from t to T:

$$ U(x, t) = \max_{E(\cdot)\in[E_{\min};\,E_{\max}]} \mathbb{E}\left[\int_t^T (pqxE - cE)\,ds\right]. \tag{4.5} $$
This functional gives the remaining optimal cost by assuming that we arrive at xt in
time t < T . The final condition imposed by function U is:
U (x, T ) = 0 (4.6)
$$ U(x + dx_t, t + dt) = U(x, t) + \frac{\partial U}{\partial x}\,dx_t + \frac{\partial U}{\partial t}\,dt + \frac{1}{2}\left[U_{xx}(dx_t)^2 + U_{tt}(dt)^2\right] + o\big((dt)^2\big) \tag{4.8} $$
We have:
E(dxt ) = f dt (4.9)
and

$$ \mathbb{E}\big((dx_t)^2\big) = g\,dt + o(dt). \tag{4.10} $$
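The step behind (4.10), spelled out for completeness (our own added detail, using only the standard properties of Brownian increments):

$$ (dx_t)^2 = f^2(dt)^2 + 2f\sqrt{g}\,dW\,dt + g\,(dW)^2, \qquad \mathbb{E}(dW) = 0, \qquad \mathbb{E}\big((dW)^2\big) = dt, $$

so that $\mathbb{E}\big((dx_t)^2\big) = g\,dt + o(dt)$.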
Computing the expectation and substituting (4.9) and (4.10) into (4.8), we have:

$$ \mathbb{E}\big(U(x + dx_t, t + dt)\big) = U(x, t) + (U_x f)\,dt + \frac{\partial U}{\partial t}\,dt + \frac{1}{2}U_{xx}\,g\,dt + \text{terms in } (dt)^2. \tag{4.11} $$
Note that we have suppressed the arguments of the functions involved in (4.12). Cancelling the term U on both sides of (4.12), dividing the remainder by dt, and letting dt → 0, we obtain the Hamilton-Jacobi-Bellman (HJB) equation:
$$ \frac{\partial U}{\partial t} + \max_{E(\cdot)\in[E_{\min};\,E_{\max}]}\left[pqxE - cE + U_x f + \frac{1}{2}U_{xx}\,g\right] = 0 \tag{4.13} $$

$$ U(x, T) = 0. \tag{4.14} $$
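Since the expression in square brackets in (4.13) is affine in E, its maximum over [E_min; E_max] is attained at one of the bounds; a short computation (our own remark, in the notation above) gives the switching rule

$$ \frac{\partial}{\partial E}\Big[pqxE - cE + U_x f + \tfrac{1}{2}U_{xx}\,g\Big] = pqx - c - qx\,U_x + \tfrac{1}{2}\,qx\,U_{xx}, \qquad E^{*}(x, t) = \begin{cases} E_{\max} & \text{if } pqx - c - qx\,U_x + \tfrac{1}{2}\,qx\,U_{xx} > 0,\\ E_{\min} & \text{otherwise.} \end{cases} $$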
Noticing that the terms in square brackets are continuous in E, the maximum always exists in [E_min; E_max], where E_min and E_max are positive real values satisfying E_min < E_max. We denote this maximum by E, so that our problem becomes
$$
\begin{cases}
\dfrac{\partial U}{\partial t} + \dfrac{1}{2}(bx + dx + qxE)\dfrac{\partial^2 U}{\partial x^2} + (bx - dx - qxE)\dfrac{\partial U}{\partial x} = cE - pqxE\\[8pt]
U(x, T) = 0
\end{cases}
\tag{4.16}
$$
By using the change of variables s = T − t, the system (4.16) can be rewritten as follows:
$$
\begin{cases}
\dfrac{\partial V}{\partial s}(x; s) - \dfrac{1}{2}(bx + dx + qxE)\dfrac{\partial^2 V}{\partial x^2}(x; s) - (bx - dx - qxE)\dfrac{\partial V}{\partial x}(x; s) = pqxE - cE\\[8pt]
V(x, 0) = 0
\end{cases}
\tag{4.17}
$$
Fig. 8 Representation of solution of (4.16) for b = 0.01, d = 0.02, q = 0.1, p = 0.08, c = 0.5,
T = 10, a = 5, E = 0.2 (case (a)) and E = 0.4 (case (b))
Fig. 9 Representation of solution of (4.16) for b = 0.01, d = 0.02, q = 0.1, p = 0.08, c = 0.5,
T = 10, a = 5, E = 0.7 (case (c)) and E = 1 (case (d))
for i ∈ {1, …, N}:

$$ \frac{\partial^2 V}{\partial x^2}(x; s) \approx \frac{V_{i+1}^j - 2V_i^j + V_{i-1}^j}{(\Delta x)^2} \tag{4.21} $$

$$ \frac{\partial V}{\partial x}(x; s) \approx \frac{V_{i+1}^j - V_i^j}{\Delta x} \tag{4.22} $$

$$ \frac{\partial V}{\partial s}(x; s) \approx \frac{V_i^{j+1} - V_i^j}{\Delta s}. \tag{4.23} $$
for i = N + 1:

$$ \frac{\partial^2 V}{\partial x^2}(x; s) \approx \frac{V_{N-1}^j - 2V_N^j + V_{N+1}^j}{(\Delta x)^2} \tag{4.24} $$

$$ \frac{\partial V}{\partial x}(x; s) \approx \frac{V_N^j - V_{N+1}^j}{\Delta x} \tag{4.25} $$

$$ \frac{\partial V}{\partial s}(x; s) \approx \frac{V_{N+1}^{j+1} - V_{N+1}^j}{\Delta s}. \tag{4.26} $$
for i ∈ {1, …, N}:

$$ \frac{V_i^{j+1} - V_i^j}{\Delta s} + A(x_i)\left[\frac{V_{i+1}^j - 2V_i^j + V_{i-1}^j}{(\Delta x)^2}\right] + B(x_i)\left[\frac{V_{i+1}^j - V_i^j}{\Delta x}\right] = C(x_i) \tag{4.28} $$
for i = N + 1:

$$ \frac{V_{N+1}^{j+1} - V_{N+1}^j}{\Delta s} + A(x_{N+1})\left[\frac{V_{N-1}^j - 2V_N^j + V_{N+1}^j}{(\Delta x)^2}\right] + B(x_{N+1})\left[\frac{V_N^j - V_{N+1}^j}{\Delta x}\right] = C(x_{N+1}) \tag{4.29} $$
where

$$ A(x_i) = -\frac{1}{2}(bx_i + dx_i + qEx_i), \qquad B(x_i) = -(bx_i - dx_i - qEx_i). $$
With simple calculations, Eqs. (4.27), (4.28) and (4.29) respectively become:
$$ \frac{V_0^{j+1} - V_0^j}{\Delta s} + \alpha_0 V_2^j + \beta_0 V_1^j + \gamma_0 V_0^j = C(x_0) \tag{4.30} $$

$$ \frac{V_i^{j+1} - V_i^j}{\Delta s} + \alpha_i V_{i+1}^j + \beta_i V_i^j + \gamma_i V_{i-1}^j = C(x_i) \tag{4.31} $$

$$ \frac{V_{N+1}^{j+1} - V_{N+1}^j}{\Delta s} + \alpha_{N+1} V_{N-1}^j + \beta_{N+1} V_{N+1}^j + \gamma_{N+1} V_N^j = C(x_{N+1}) \tag{4.32} $$
where

$$ \alpha_0 = \frac{A(x_0)}{(\Delta x)^2}, \qquad \beta_0 = -\frac{2A(x_0)}{(\Delta x)^2} + \frac{B(x_0)}{\Delta x}, \qquad \gamma_0 = \frac{A(x_0)}{(\Delta x)^2} - \frac{B(x_0)}{\Delta x}, $$

$$ \alpha_i = \frac{A(x_i)}{(\Delta x)^2} + \frac{B(x_i)}{\Delta x}, \qquad \beta_i = -\frac{2A(x_i)}{(\Delta x)^2} - \frac{B(x_i)}{\Delta x}, \qquad \gamma_i = \frac{A(x_i)}{(\Delta x)^2}, $$

and

$$ \alpha_{N+1} = \frac{A(x_{N+1})}{(\Delta x)^2}, \qquad \beta_{N+1} = \frac{A(x_{N+1})}{(\Delta x)^2} - \frac{B(x_{N+1})}{\Delta x}, \qquad \gamma_{N+1} = -\frac{2A(x_{N+1})}{(\Delta x)^2} + \frac{B(x_{N+1})}{\Delta x}. $$
The scheme (4.30), (4.31) and (4.32) can be written in the following form:

$$
\begin{cases}
V^{j+1} = (I - \Delta s\,M)\,V^j + \Delta s\,C,\\
j \in \{0, \dots, K+1\},
\end{cases}
\tag{4.33}
$$
where M is the matrix whose rows are built from the coefficients γ_i, β_i, α_i defined above and C = (C(x_0), …, C(x_{N+1}))^T.
Proposition 4.2 If the norm of I − Δs M is less than 1, then the scheme (4.33) is convergent for the norm ‖·‖_∞, where

$$ \|A\|_\infty = \max_{1\le i\le n}\sum_{j=1}^{n} |a_{ij}|. $$
Proof Assume that the norm of I − Δs M is less than 1. For i = 0, the scheme (4.33) is equivalent to

$$ V_0^{j+1} = (1 - \Delta s\,\gamma_0)V_0^j - \Delta s\,\beta_0 V_1^j - \Delta s\,\alpha_0 V_2^j + \Delta s\,C(x_0). $$

Passing to norms, we have:

$$ |V_0^{j+1}| \le |1 - \Delta s\,\gamma_0|\,|V_0^j| + |\Delta s\,\beta_0|\,|V_1^j| + |\Delta s\,\alpha_0|\,|V_2^j| + \Delta s\,|C(x_0)|. $$

With our assumption, we obtain:

$$ |V_0^{j+1}| \le \big(|1 - \Delta s\,\gamma_0| + |\Delta s\,\beta_0| + |\Delta s\,\alpha_0|\big)\|V^j\|_\infty + \Delta s\,\|C\|_\infty \le \|V^j\|_\infty + \Delta s\,\|C\|_\infty. \tag{4.34} $$
With the same arguments, we show that, for i ∈ {1, …, N},

$$ |V_i^{j+1}| \le \|V^j\|_\infty + \Delta s\,\|C\|_\infty. \tag{4.35} $$

Equations (4.34), (4.35) and (4.36) show that, for i ∈ {0, …, N + 1} and j ∈ {0, …, M},

$$ \|V^{j+1}\|_\infty \le \|V^j\|_\infty + \Delta s\,\|C\|_\infty. \tag{4.37} $$
Iterating (4.37) gives ‖V^j‖_∞ ≤ ‖V^0‖_∞ + jΔs‖C‖_∞ for all j ∈ {0, …, M + 1}, which shows that the scheme is stable and ends the proof.
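For illustration, a compact sketch of the explicit scheme (4.33) applied to (4.17) could read as follows. It is our own code, not the authors'; the right-hand side C(x) = pqxE − cE is read off from (4.17), the physical parameters are those quoted for Fig. 8 (case (a)), and the grid sizes N and n_steps are our own choices, with Δs taken small enough for the explicit march to remain stable in the sense of Proposition 4.2.

```python
import numpy as np

# parameters of Fig. 8, case (a): space domain [0, a], horizon T, constant effort E
b, d, q, p, c, E = 0.01, 0.02, 0.1, 0.08, 0.5, 0.2
a, T = 5.0, 10.0
N, n_steps = 50, 20_000                  # grid points x_0, ..., x_{N+1} and time steps
xg = np.linspace(0.0, a, N + 2)
dx, ds = xg[1] - xg[0], T / n_steps

A = -0.5 * (b*xg + d*xg + q*E*xg)        # coefficient of the second derivative
B = -(b*xg - d*xg - q*E*xg)              # coefficient of the first derivative
C = p*q*xg*E - c*E                       # right-hand side of (4.17)

i = np.arange(1, N + 1)                  # interior indices, rows (4.31)
alpha = A[i]/dx**2 + B[i]/dx
beta  = -2*A[i]/dx**2 - B[i]/dx
gamma = A[i]/dx**2

V = np.zeros(N + 2)                      # initial condition V(x, 0) = 0
for _ in range(n_steps):
    Vn = V.copy()
    Vn[i] = V[i] - ds*(alpha*V[i+1] + beta*V[i] + gamma*V[i-1]) + ds*C[i]
    Vn[0] = V[0] - ds*(A[0]/dx**2*V[2] + (-2*A[0]/dx**2 + B[0]/dx)*V[1]
                       + (A[0]/dx**2 - B[0]/dx)*V[0]) + ds*C[0]            # row (4.30)
    Vn[-1] = V[-1] - ds*(A[-1]/dx**2*V[N-1] + (A[-1]/dx**2 - B[-1]/dx)*V[-1]
                         + (-2*A[-1]/dx**2 + B[-1]/dx)*V[N]) + ds*C[-1]    # row (4.32)
    V = Vn
# V now approximates V(x, T), i.e. the value function U(x, 0) after the change s = T - t
```

Plotting V against x at successive values of s reproduces surfaces of the type shown in Figs. 8 and 9.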
References
1. E. Allen, Modeling with Itô Stochastic Differential Equations (Springer, Dordrecht, 2007)
2. O.A. Ladyzhenskaya, V.A. Solonnikov, N.N. Ural'tseva, Linear and Quasilinear Equations of Parabolic Type (AMS, Providence, 1968)