
Stochastic Optimization in Population Dynamics: The Case of Multi-site Fisheries

Sidy Ly and Diaraf Seck

Abstract In this paper we build and study stochastic models governing fisheries activities. The evolution of the population is described by stochastic differential equations derived from the proposed model. Using tools of dynamic programming, we also derive Hamilton-Jacobi-Bellman equations, which we study from both theoretical and numerical points of view.

Keywords Stochastic differential equations · Stochastic optimization · Dynamic programming principle · Hamilton-Jacobi-Bellman equations · Numerical simulations

Mathematics Subject Classification (2010) 49L20, 65C30, 35R60, 65N06

1 Introduction

A stochastic differential equation is a generalization of an ordinary differential equation with an added white-noise term. Stochastic differential equations are used to model random trajectories, such as stock prices or the movements of particles or of species, such as fish, subject to diffusion phenomena. They are also used to treat, theoretically and numerically, problems arising from partial differential equations. Their fields of application are notably vast in physics, biology, financial mathematics and population dynamics. It is in this last respect that we make our application. The aim of this

S. Ly ()
University Cheikh Anta Diop of Dakar, FASEG LMDAN, Dakar, Senegal
e-mail: [email protected]
D. Seck
University Cheikh Anta Diop of Dakar, FASEG LMDAN, Dakar, Senegal
IRD, UMMISCO, Dakar, Senegal
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license 119
to Springer Nature Switzerland AG 2020
D. Seck et al. (eds.), Nonlinear Analysis, Geometry and Applications,
Trends in Mathematics, https://doi.org/10.1007/978-3-030-57336-2_5

paper is to solve stochastic optimization problems related to fishery, both in the case of a single site and in the case of several sites. The sites considered here are FADs (Fish Aggregating Devices), that is, sophisticated objects that have the power to attract fish.
This paper is organized in three major parts: modeling, and the theoretical and numerical resolution of stochastic optimization problems.
The modeling is inspired by Allen [1] and covers two cases. In the first case, we consider the sea as one site. As a result, we neglect certain internal movements. Based on further hypotheses, we model the fishery by two stochastic differential equations: one governing the density of the species and the other the fishing effort. In the second case, we put L sites in the sea and assume that the boats, like the fish, can move from one site to another. Modeling in this case leads to a system of 2L + 1 stochastic differential equations, of which L + 1 describe the fish and L the boats.
For the stochastic optimization problems, in each of the two cases we maximize a functional representing the profits under the constraints of the stochastic differential equations resulting from the modeling. Using the dynamic programming principle leads us to a Hamilton-Jacobi-Bellman equation whose solution is given by Ladyzhenskaya [2].
For the numerical simulations, we use finite difference methods to solve the partial differential equation given by the dynamic programming principle.

2 Modeling

2.1 Single Site Case

In this section, we want to model the fishery by considering two sites: one for the fish, the other for the fishermen (boats); see Fig. 1. Let x(t) and E(t) represent respectively the size of the fish population and the fishing effort in the system at time t. It is assumed that in a small time interval Δt, x can change by −1, 0 or 1 and E by −1, 0 or 1. Let us consider ΔX = [Δx; ΔE]^T, the change in a small time interval Δt. In this model, catching by other predators is not taken into account and we do not look at the growth of the fishermen's population. We denote by b the per capita birth rate, d the per capita death rate, c the cost per unit of fishing effort, q the catchability in the site and p the price.
Under these assumptions, as illustrated in Fig. 1, there are four possible changes for the two states in the time interval Δt, not counting the case where there is no change, and neglecting multiple births, deaths or transformations in time Δt, which have probabilities of order (Δt)². The possible changes and their probabilities are given in Fig. 2. Now we are interested in finding the mean change E(ΔX) and the covariance matrix E(ΔX(ΔX)^T) for the time interval Δt. Neglecting

Fig. 1 Modeling with two sites

Change                     Probability
ΔX^1 = [1; 0]^T            p_1 = bx Δt
ΔX^2 = [−1; 0]^T           p_2 = dx Δt + qxE Δt
ΔX^3 = [0; 1]^T            p_3 = pqxE Δt
ΔX^4 = [0; −1]^T           p_4 = cE Δt
ΔX^5 = [0; 0]^T            p_5 = 1 − Σ_{i=1}^{4} p_i

Fig. 2 Possible changes in the population of the fishes with corresponding probabilities

terms of order (Δt)², we have

$$E(\Delta X) = \sum_{i=1}^{5} p_i \Delta X^i = \begin{bmatrix} bx - dx - qxE \\ pqxE - cE \end{bmatrix} \Delta t$$

and

$$E(\Delta X(\Delta X)^T) = \sum_{i=1}^{5} p_i (\Delta X^i)(\Delta X^i)^T = \begin{bmatrix} bx + dx + qxE & 0 \\ 0 & pqxE + cE \end{bmatrix} \Delta t.$$

We now define the expectation vector f and the 2 × 2 symmetric positive definite covariance matrix G:

$$f(t, X) = E(\Delta X)/\Delta t \quad \text{and} \quad G(t, X) = E(\Delta X(\Delta X)^T)/\Delta t.$$



Noticing that Δt is small and E(ΔX)(E(ΔX))^T = O((Δt)²), the covariance matrix is set equal to E(ΔX(ΔX)^T)/Δt. Referring to [1], this leads us to the stochastic differential equation system:

$$\begin{cases} dx(t) = (bx - dx - qxE)\,dt + \sqrt{bx + dx + qxE}\,dW_1(t) \\ dE(t) = (pqxE - cE)\,dt + \sqrt{pqxE + cE}\,dW_2(t) \\ x(0) = x_0 \\ E(0) = E_0 \end{cases} \tag{2.1}$$

The stochastic differential equation model for the dynamics of these interacting populations (2.1) can be rewritten as follows:

$$dX(t) = f(t, X)\,dt + G^{1/2}(t, X)\,dW(t) \tag{2.2}$$

with X(0) = X_0,

$$G(t, X) = \begin{bmatrix} bx + dx + qxE & 0 \\ 0 & pqxE + cE \end{bmatrix}$$

and W(t) = [W_1(t); W_2(t)]^T a two-dimensional Brownian motion.
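The system (2.1) can be integrated numerically. The following sketch (ours, not part of the paper) applies an Euler-Maruyama discretization with the parameter values used later in Fig. 7; the states are clipped at zero so the square-root diffusion terms stay defined, which is a numerical choice of ours rather than a feature of (2.1).

```python
import numpy as np

def simulate_single_site(x0=10.0, E0=5.0, b=0.05, d=0.02, q=0.01,
                         p=1.0, c=0.02, T=100.0, n_steps=20000, seed=0):
    """Euler-Maruyama discretization of system (2.1)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    E = np.empty(n_steps + 1)
    x[0], E[0] = x0, E0
    for k in range(n_steps):
        xk, Ek = x[k], E[k]
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        drift_x = (b - d)*xk - q*xk*Ek          # bx - dx - qxE
        drift_E = p*q*xk*Ek - c*Ek              # pqxE - cE
        diff_x = np.sqrt(b*xk + d*xk + q*xk*Ek)  # sqrt(bx + dx + qxE)
        diff_E = np.sqrt(p*q*xk*Ek + c*Ek)       # sqrt(pqxE + cE)
        # clip at 0 so the diffusion coefficients remain real
        x[k+1] = max(xk + drift_x*dt + diff_x*dW1, 0.0)
        E[k+1] = max(Ek + drift_E*dt + diff_E*dW2, 0.0)
    return x, E
```

One sample path per call; averaging many seeds approximates the moments built from f and G above.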

2.2 3 Sites Case

We place three sites in the sea. We suppose that fish move randomly from the sea to a given site i and vice versa. They can also move from a site i to a site j ≠ i and vice versa. We denote respectively by m_{si} and m_{ij} the movement rates from the sea to site i and from site i to site j.
We designate by x_i(t) the size of the population at time t in site i. We study the variation of the population Δx during an interval of time Δt.
The time interval Δt is assumed to be sufficiently small that there cannot be several births and several deaths at the same time in different sites.
In this model, natural death and capture by other predators are not taken into account. The only way for fish to die is fishing. We denote by b_i the per capita birth rate and q_i the catchability in site i (see Fig. 3).
Under these assumptions, there are 21 possibilities for a population change Δx if we neglect multiple births, deaths or transformations in time Δt, which have probabilities of order (Δt)². These possibilities are listed in Fig. 4 along with their corresponding probabilities.
The first component of the vector Δx^i represents the change in the population of fish in the sea (off-site). The second, third and fourth components represent respectively the changes in sites 1, 2 and 3. For example,

Fig. 3 Evolution of the fishery in the case of 3 sites

Fig. 4 Possible changes in the population of the fishes with the corresponding probabilities

Δx^{20} = [−1, 1, 0, 0]^T represents the movement of one individual from the off-site population x_s to the population x_1 during the time interval Δt, and the probability of this event is proportional to the size of the population x_s and to the time interval Δt, that is, p_{20} = m_{s1} x_s Δt. As a second example, Δx^{10} = [0, 1, 0, 0]^T represents a birth in the population x_1 with probability p_{10} = b_1 x_1 Δt. As a third example, Δx^1 = [0, 0, 0, −1]^T represents a death or the catching of one individual in site 3, and the probability of this event is proportional to the size of the population x_3, the fishing effort E_3 and the time interval Δt, that is, p_1 = q_3 x_3 E_3 Δt. It is assumed that Δt > 0 is sufficiently small so that p_{21} > 0. Notice that $\sum_{i=1}^{21} p_i = 1$.
Now we are interested in finding the mean change $E(\Delta x) = \sum_{i=1}^{21} p_i \Delta x^i$ and the covariance matrix E(Δx(Δx)^T) for the time interval Δt. Neglecting terms of order (Δt)², we have
$$E(\Delta x) = \begin{bmatrix} b_s x_s - d_s x_s + m_{1s}x_1 + m_{2s}x_2 + m_{3s}x_3 - (m_{s1}+m_{s2}+m_{s3})x_s \\ b_1 x_1 - d_1 x_1 - q_1 x_1 E_1 + m_{s1}x_s + m_{21}x_2 + m_{31}x_3 - (m_{1s}+m_{12}+m_{13})x_1 \\ b_2 x_2 - d_2 x_2 - q_2 x_2 E_2 + m_{s2}x_s + m_{12}x_1 + m_{32}x_3 - (m_{2s}+m_{21}+m_{23})x_2 \\ b_3 x_3 - d_3 x_3 - q_3 x_3 E_3 + m_{s3}x_s + m_{13}x_1 + m_{23}x_2 - (m_{3s}+m_{31}+m_{32})x_3 \end{bmatrix} \Delta t$$

and

$$E(\Delta x(\Delta x)^T) = \sum_{i=1}^{21} p_i \Delta x^i (\Delta x^i)^T = \begin{bmatrix} \delta_s & a_{s1} & a_{s2} & a_{s3} \\ a_{1s} & \delta_1 & a_{12} & a_{13} \\ a_{2s} & a_{21} & \delta_2 & a_{23} \\ a_{3s} & a_{31} & a_{32} & \delta_3 \end{bmatrix} \Delta t.$$

E(Δx(Δx)^T) is a symmetric positive definite matrix with:

δ_s = b_s x_s + d_s x_s + m_{1s}x_1 + m_{2s}x_2 + m_{3s}x_3 + (m_{s1}+m_{s2}+m_{s3})x_s,
δ_1 = b_1 x_1 + d_1 x_1 + q_1 x_1 E_1 + m_{s1}x_s + m_{21}x_2 + m_{31}x_3 + (m_{1s}+m_{12}+m_{13})x_1,
δ_2 = b_2 x_2 + d_2 x_2 + q_2 x_2 E_2 + m_{s2}x_s + m_{12}x_1 + m_{32}x_3 + (m_{2s}+m_{21}+m_{23})x_2,
δ_3 = b_3 x_3 + d_3 x_3 + q_3 x_3 E_3 + m_{s3}x_s + m_{13}x_1 + m_{23}x_2 + (m_{3s}+m_{31}+m_{32})x_3,
a_{s1} = a_{1s} = −(m_{s1}x_s + m_{1s}x_1); a_{s2} = a_{2s} = −(m_{s2}x_s + m_{2s}x_2);
a_{s3} = a_{3s} = −(m_{s3}x_s + m_{3s}x_3); a_{12} = a_{21} = −(m_{12}x_1 + m_{21}x_2);
a_{13} = a_{31} = −(m_{13}x_1 + m_{31}x_3) and a_{23} = a_{32} = −(m_{23}x_2 + m_{32}x_3).

We now define the expectation vector f and the 4 × 4 symmetric positive definite covariance matrix g as

$$f(t, x_1, x_2, x_3) = E(\Delta x)/\Delta t \quad \text{and} \quad g(t, x_1, x_2, x_3) = E(\Delta x(\Delta x)^T)/\Delta t.$$

Noticing that Δt is small and E(Δx)(E(Δx))^T = O((Δt)²), the covariance matrix is set equal to E(Δx(Δx)^T)/Δt. Referring to [1] (Section 5, page 135), this leads us to the stochastic differential equation system:

$$\begin{cases} dX(t) = f\,dt + g^{1/2}\,dW \\ X(0) = X_0 \end{cases} \tag{2.3}$$

where W(t) = [W_s(t), W_1(t), W_2(t), W_3(t)]^T.
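Before moving on to the boats, the construction of f and g can be sanity-checked numerically. The sketch below is our illustration, not part of the paper; the array names `msi`, `mis` and `m` are hypothetical containers for the rates m_{si}, m_{is} and m_{ij}. It builds both objects for sample values and verifies that g is symmetric positive (semi)definite, and that the movement terms cancel in the summed drift, so only births, deaths and catches change the total population.

```python
import numpy as np

def three_site_coefficients(xs, x, E, bs, ds, b, d, q, msi, mis, m):
    """Drift vector f and covariance matrix g of the 3-site fish model (2.3).

    xs: off-site stock; x, E: per-site stock and effort (length 3);
    msi[i]: rate sea -> site i; mis[i]: rate site i -> sea;
    m[i][j]: rate site i -> site j (diagonal entries ignored).
    """
    L = 3
    f = np.empty(L + 1)
    f[0] = bs*xs - ds*xs + sum(mis[i]*x[i] for i in range(L)) - sum(msi)*xs
    g = np.zeros((L + 1, L + 1))
    g[0, 0] = bs*xs + ds*xs + sum(mis[i]*x[i] for i in range(L)) + sum(msi)*xs
    for i in range(L):
        inflow = msi[i]*xs + sum(m[j][i]*x[j] for j in range(L) if j != i)
        outflow = (mis[i] + sum(m[i][j] for j in range(L) if j != i))*x[i]
        f[i+1] = b[i]*x[i] - d[i]*x[i] - q[i]*x[i]*E[i] + inflow - outflow
        g[i+1, i+1] = (b[i]*x[i] + d[i]*x[i] + q[i]*x[i]*E[i]
                       + inflow + outflow)
        g[0, i+1] = g[i+1, 0] = -(msi[i]*xs + mis[i]*x[i])
        for j in range(L):
            if j != i:
                g[i+1, j+1] = -(m[i][j]*x[i] + m[j][i]*x[j])
    return f, g
```

Because every off-diagonal movement term also appears (with a plus sign) on the diagonal, g is weakly diagonally dominant with a nonnegative diagonal, hence positive semidefinite by Gershgorin's theorem.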


Now we are going to do the same thing for the boats. As for the fish, we start from the particular case of 3 sites and then generalize to L sites. We assume that these 3 sites are placed in the sea in order to attract the fish to be caught.
To determine our model, we assume that we have only one boat that can move from one site i to another site j with symmetric movement rates β_{ij}, that is, β_{ij} = β_{ji} for all i ≠ j. Here we assume that we do not fish off-site and that on each site i there is a cost per unit of fishing effort c_i and a catchability q_i. In our model, we assume that we do not incur costs at several different sites at the same time and that we do not capture at different sites at the same time. The cost to leave a site i for another site j is proportional to the distance d_{ij} between the two sites, which means that c_i = φ_{ij} d_{ij} = φ_{ji} d_{ji} where φ_{ji} > 0.
Under these assumptions, there are also 13 possible changes for the variation of the fishing effort ΔE during a sufficiently small time interval Δt. These possibilities are listed in Fig. 5 along with their corresponding probabilities.
The first component of the vector ΔE^i represents the evolution of the fishery in site 1. The second and third components represent respectively the evolution of the fishery in sites 2 and 3. For example, ΔE^{12} = [−1, 1, 0]^T represents the movement of the boat from site 1 to site 2 during the time interval Δt, and the probability of this event is proportional to the fishing effort E_1 and the time interval Δt, that is, p_{12} = β_{12} E_1 Δt. As a second example, ΔE^6 = [0, 1, 0]^T represents a capture in site 2 with probability proportional to the size of the population x_2, the fishing effort E_2 and the time interval Δt, that is, p_6 = pq_2 x_2 E_2 Δt, where p represents the price of the species. As a third example, ΔE^4 = [0, 0, −1]^T represents a cost per unit of fishing in site 3, and the probability of this event is proportional to the fishing effort E_3 and the time interval Δt, that is, p_4 = c_3 E_3 Δt. It is assumed that Δt > 0 is sufficiently small so that p_{13} > 0. Notice that $\sum_{i=1}^{13} p_i = 1$.

Fig. 5 Possible changes in the fishing effort with corresponding probabilities

Then, neglecting terms of order (Δt)², the mean change E(ΔE) and the covariance matrix E(ΔE(ΔE)^T) for the time interval Δt are given by:

$$E(\Delta E) = \sum_{i=1}^{13} p_i \Delta E^i = \begin{bmatrix} \beta_{21}E_2 + \beta_{31}E_3 - (\beta_{12}+\beta_{13})E_1 + pq_1x_1E_1 - c_1E_1 \\ \beta_{12}E_1 + \beta_{32}E_3 - (\beta_{21}+\beta_{23})E_2 + pq_2x_2E_2 - c_2E_2 \\ \beta_{13}E_1 + \beta_{23}E_2 - (\beta_{31}+\beta_{32})E_3 + pq_3x_3E_3 - c_3E_3 \end{bmatrix} \Delta t$$

and

$$E(\Delta E(\Delta E)^T) = \sum_{i=1}^{13} p_i \Delta E^i (\Delta E^i)^T = \begin{bmatrix} \gamma_1 & b_{12} & b_{13} \\ b_{21} & \gamma_2 & b_{23} \\ b_{31} & b_{32} & \gamma_3 \end{bmatrix} \Delta t.$$

E(ΔE(ΔE)^T) is a symmetric positive definite matrix with:

γ_1 = β_{21}E_2 + β_{31}E_3 + (β_{12}+β_{13})E_1 + pq_1x_1E_1 + c_1E_1,
γ_2 = β_{12}E_1 + β_{32}E_3 + (β_{21}+β_{23})E_2 + pq_2x_2E_2 + c_2E_2,
γ_3 = β_{13}E_1 + β_{23}E_2 + (β_{31}+β_{32})E_3 + pq_3x_3E_3 + c_3E_3,

b_{12} = b_{21} = −(β_{12}E_1 + β_{21}E_2); b_{32} = b_{23} = −(β_{32}E_3 + β_{23}E_2); b_{13} = b_{31} = −(β_{13}E_1 + β_{31}E_3).
We now define the expectation vector f_1 and the 3 × 3 symmetric positive definite covariance matrix g_1 as follows:

$$f_1(t, E_1, E_2, E_3) = E(\Delta E)/\Delta t \quad \text{and} \quad g_1(t, E_1, E_2, E_3) = E(\Delta E(\Delta E)^T)/\Delta t.$$

Noticing that Δt is small and E(ΔE)(E(ΔE))^T = O((Δt)²), the covariance matrix is set equal to E(ΔE(ΔE)^T)/Δt. Referring to [1] (Section 5, page 135), this leads us to the stochastic differential equation system:

$$\begin{cases} dY(t) = f_1\,dt + g_1^{1/2}\,dW_1 \\ Y(0) = Y_0 \end{cases} \tag{2.4}$$

where $W_1(t) = [W_1^1(t), W_1^2(t), W_1^3(t)]^T$.

2.3 L Sites Case

In this section, we want to model the fishery by considering the evolution of the resource (fish) and boat movements between the different sites.
For this, we place L sites in the sea, where L is a positive integer (≥ 3). Here we call the sites FADs (Fish Aggregating Devices). They are objects that have the power to attract fish. We suppose that fish move randomly from the sea to a given site i and vice versa. They can also move from a site i to a site j ≠ i and vice versa. In this model, capture by other predators is not taken into account. We denote by b_i the per capita birth rate, d_i the per capita death rate in site i and q_i the catchability in site i. We denote respectively by m_{si} and m_{ij} the movement rates from the sea to site i and from site i to site j.
We designate by x_i(t) the size of the population at time t in site i and by E_i(t) the fishing effort.
To determine our model, we assume that we have only one boat that can move from one site i to another site j with symmetric movement rates β_{ij}, that is, β_{ij} = β_{ji} for all i ≠ j. Here we assume that we do not fish outside the sites and that on each site i there is a cost per unit of fishing effort c_i and a catchability q_i. In our model, we assume that we do not incur costs at several different sites at the same time and that we do not capture at different sites at the same time (see Fig. 6).
Under these assumptions, we have (L + 1)(L + 2) + 1 possibilities (L ≥ 2) corresponding to the evolution of the fishery. With the same arguments as in the single-site case, we obtain the following stochastic model:

Fig. 6 Pattern for L sites



$$\begin{cases} dX(t) = f\,dt + g^{1/2}\,dW \\ dY(t) = f_1\,dt + g_1^{1/2}\,dW_1 \\ X(0) = X_0 \\ Y(0) = Y_0 \end{cases} \tag{2.5}$$

where the vectors X, Y, W, W_1, f and f_1 are defined by:

$$X_t = \begin{pmatrix} x_s \\ x_1 \\ \vdots \\ x_L \end{pmatrix};\quad Y_t = \begin{pmatrix} E_1 \\ E_2 \\ \vdots \\ E_L \end{pmatrix};\quad W = \begin{pmatrix} w_s \\ w_1 \\ \vdots \\ w_L \end{pmatrix};\quad W_1 = \begin{pmatrix} w_1^1 \\ w_1^2 \\ \vdots \\ w_1^L \end{pmatrix};$$

$$f = \begin{bmatrix} b_s x_s - d_s x_s + \sum_{i=1}^{L} m_{is} x_i - \sum_{i=1}^{L} m_{si} x_s \\ b_1 x_1 - d_1 x_1 - q_1 x_1 E_1 + m_{s1} x_s - m_{1s} x_1 + \sum_{i \neq 1} m_{i1} x_i - \sum_{i \neq 1} m_{1i} x_1 \\ \vdots \\ b_L x_L - d_L x_L - q_L x_L E_L + m_{sL} x_s - m_{Ls} x_L + \sum_{i \neq L} m_{iL} x_i - \sum_{i \neq L} m_{Li} x_L \end{bmatrix} = \begin{bmatrix} f_s \\ f_1 \\ \vdots \\ f_L \end{bmatrix}$$

and

$$f_1 = \begin{bmatrix} \sum_{i \neq 1} \beta_{i1} E_i - \sum_{i \neq 1} \beta_{1i} E_1 + pq_1 x_1 E_1 - c_1 E_1 \\ \sum_{i \neq 2} \beta_{i2} E_i - \sum_{i \neq 2} \beta_{2i} E_2 + pq_2 x_2 E_2 - c_2 E_2 \\ \vdots \\ \sum_{i \neq L} \beta_{iL} E_i - \sum_{i \neq L} \beta_{Li} E_L + pq_L x_L E_L - c_L E_L \end{bmatrix} = \begin{bmatrix} f_1^1 \\ f_1^2 \\ \vdots \\ f_1^L \end{bmatrix}$$

The symmetric positive definite matrices g and g_1 are given by:

$$g = \begin{bmatrix} \delta_s & a_{s1} & \dots & a_{sL} \\ a_{1s} & \delta_1 & \dots & a_{1L} \\ \vdots & \vdots & \ddots & \vdots \\ a_{Ls} & a_{L1} & \dots & \delta_L \end{bmatrix}$$

where

$$\delta_s = b_s x_s + d_s x_s + \sum_{i=1}^{L} m_{is} x_i + \sum_{i=1}^{L} m_{si} x_s,$$

$$\delta_i = b_i x_i + d_i x_i + q_i x_i E_i + m_{si} x_s + m_{is} x_i + \sum_{j \neq i} m_{ji} x_j + \sum_{j \neq i} m_{ij} x_i \quad \text{for } i = 1, \dots, L,$$

$$a_{ij} = -(m_{ij} x_i + m_{ji} x_j) \quad \forall i \neq j,$$

and

$$g_1 = \begin{bmatrix} \gamma_1 & b_{12} & \dots & b_{1L} \\ b_{21} & \gamma_2 & \dots & b_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ b_{L1} & b_{L2} & \dots & \gamma_L \end{bmatrix}$$

where

$$\gamma_i = \sum_{j \neq i} \beta_{ji} E_j + \sum_{j \neq i} \beta_{ij} E_i + pq_i x_i E_i + c_i E_i \quad \text{for } i = 1, \dots, L$$

3 Stochastic Optimization

3.1 Position of the Problem

In this section, we want to maximize the profits, defined as the difference between the total catch in the different sites and the costs related to the fishing activity. The functional governing these profits is defined as the average value of all benefits over a time interval [0; T], T > 0, and is given by

$$J[Z(t), E(\cdot)] = E\left[\int_0^T \left(\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i\right) dt\right],$$

where the vector Z(t) = Z_t = (X_t; E_t) and c_j = φ_{ij} d_{ij} represents the cost to leave a site i for another site j; it is proportional to the distance d_{ij} between the two sites. In this functional, we do not take into account the costs incurred by the boats to leave the beach and reach the first fishing site, nor the return costs of the boats to the beach after fishing.
Then the stochastic optimization problem is to maximize J[Z(t), E(·)] under the constraints:

$$\begin{cases} dX_t = f\,dt + g^{1/2}\,dW \\ dE_t = f_1\,dt + g_1^{1/2}\,dW_1 \\ X(0) = X_0 \\ E(0) = E_0 \end{cases} \tag{3.1}$$

With Z(t) = Z_t = (X_t; E_t), (3.1) can be rewritten as follows:

$$\begin{cases} dZ_t = F\,dt + G\,d\zeta \\ Z(0) = (X_0, E_0) \end{cases}$$

where $F = \begin{pmatrix} f \\ f_1 \end{pmatrix}$, $\zeta = \begin{pmatrix} W \\ W_1 \end{pmatrix}$ and $G = \begin{pmatrix} g^{1/2} & 0 \\ 0 & g_1^{1/2} \end{pmatrix}$.

This leads us to the following stochastic optimization problem: maximize the functional

$$J[Z_t, E(\cdot)] = E\left[\int_0^T \left(\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i\right) dt\right] \tag{3.2}$$

under the constraints:

$$\begin{cases} dZ_t = F\,dt + G\,d\zeta \\ Z(0) = (X_0, E_0) \end{cases} \tag{3.3}$$

3.2 Dynamic Programming Principle

To solve the problem defined by (3.2) and (3.3), let U(Z, t), known as the value function, be the expected value of the objective function (3.2) from t to T when an optimal policy is followed from t to T, given Z_t = z:

$$U(z, t) = \max_{p(\cdot) \in [p_{min}; p_{max}]} E\left[\int_t^T \left(\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i\right) ds\right]. \tag{3.4}$$

This functional gives the remaining optimal cost assuming that we arrive at Z_t at time t < T. The final condition imposed on the function U is:

$$U(z, T) = 0 \tag{3.5}$$

Then, by the principle of optimality,

$$U(z, t) = \max_{p(\cdot) \in [p_{min}; p_{max}]} E\left[\int_t^{t+dt} \left(\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i\right) ds + U(z + dZ_t, t + dt)\right]. \tag{3.6}$$

By Taylor's expansion, we have:

$$U(z + dZ_t, t + dt) = U(z, t) + \frac{\partial U}{\partial X} dX_t + \frac{\partial U}{\partial E} dE_t + \frac{\partial U}{\partial t} dt + \frac{1}{2}\left[U_{xx}(dX_t)^2 + U_{EE}(dE_t)^2 + U_{tt}(dt)^2\right] + o(dt)^2 \tag{3.7}$$

We have:

$$dZ_t = \begin{pmatrix} dX_t \\ dE_t \end{pmatrix} = \begin{pmatrix} f\,dt + g^{1/2}\,dW \\ f_1\,dt + g_1^{1/2}\,dW_1 \end{pmatrix}$$

and

$$(dX_t)^2 = f^2(dt)^2 + 2fg^{1/2}\,dW\,dt + g(dW)^2,$$

$$(dE_t)^2 = f_1^2(dt)^2 + 2f_1 g_1^{1/2}\,dW_1\,dt + g_1(dW_1)^2.$$

By using the Brownian motion properties E(dW_t) = 0 and E(dW_t)² = dt, we have:

$$E(dZ_t) = \begin{pmatrix} f\,dt \\ f_1\,dt \end{pmatrix} \tag{3.8}$$

and

$$E(dX_t)^2 = f^2(dt)^2 + g\,dt, \qquad E(dE_t)^2 = f_1^2(dt)^2 + g_1\,dt. \tag{3.9}$$

Computing the expectation and substituting (3.8) and (3.9) into (3.7), we have:

$$E(U(z + dZ_t, t + dt)) = U(z, t) + (U_z \cdot F)dt + \frac{\partial U}{\partial t}dt + \frac{1}{2}Tr(U_{zz} \cdot G^2)dt. \tag{3.10}$$

Substituting (3.10) into (3.4), we obtain:

$$U(z, t) = \max_{p(\cdot) \in [p_{min}; p_{max}]}\left[\int_t^{t+dt}\left(\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i\right) ds + U(z, t) + (U_z \cdot F)dt + \frac{\partial U}{\partial t}dt + \frac{1}{2}Tr(U_{zz} \cdot G^2)dt\right]$$

Assuming that dt is sufficiently small, we have:

$$U(z, t) = \max_{p(\cdot) \in [p_{min}; p_{max}]}\left[\left(\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i\right) dt + U(z, t) + (U_z \cdot F)dt + \frac{\partial U}{\partial t}dt + \frac{1}{2}Tr(U_{zz} \cdot G^2)dt + o(dt)\right] \tag{3.11}$$

Note that we have suppressed the arguments of the functions involved in (3.11).

Cancelling the term U on both sides of (3.11), dividing the remainder by dt, and letting dt → 0, we obtain the Hamilton-Jacobi-Bellman (HJB) equation

$$\frac{\partial U}{\partial t} + \max_{p(\cdot) \in [p_{min}; p_{max}]}\left[\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i + U_z \cdot F + \frac{1}{2}Tr(U_{zz} \cdot G^2)\right] = 0 \tag{3.12}$$

for the value function U(z, t), with the boundary condition

$$U(z, T) = 0, \tag{3.13}$$

that is,

$$\begin{cases} \dfrac{\partial U}{\partial t} + \max\limits_{p(\cdot) \in [p_{min}; p_{max}]}\left[\sum_{i=1}^{L} pq_i x_i E_i - \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i + U_z \cdot F + \dfrac{1}{2}Tr(U_{zz} \cdot G^2)\right] = 0 \\ U(z, T) = 0 \end{cases} \tag{3.14}$$

Noticing that the terms in square brackets are continuous in p, the maximum always exists in [p_{min}; p_{max}]. We denote this maximum by p, so that our problem becomes:

$$\begin{cases} \dfrac{\partial U}{\partial t} + U_z \cdot F + \dfrac{1}{2}Tr(U_{zz} \cdot G^2) = \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i - \sum_{i=1}^{L} pq_i x_i E_i \\ U(z, T) = 0 \end{cases} \tag{3.15}$$

To simplify calculations, we put the index s = 0, δ_s = a_{00} and, for i = 1, …, L, δ_i = a_{ii} and γ_i = b_{ii}. Then

$$U_z \cdot F = \sum_{i=0}^{L} f_i \frac{\partial U}{\partial x_i} + \sum_{i=1}^{L} f_1^i \frac{\partial U}{\partial E_i}$$

and

$$Tr(U_{zz} \cdot G^2) = \sum_{i,j=0}^{L} a_{ij} \frac{\partial^2 U}{\partial x_i \partial x_j} + \sum_{i,j=1}^{L} b_{ij} \frac{\partial^2 U}{\partial E_i \partial E_j}.$$

So Eq. (3.15) becomes:

$$\begin{cases} \dfrac{\partial U}{\partial t} + \sum_{i=0}^{L} f_i \dfrac{\partial U}{\partial x_i} + \sum_{i=1}^{L} f_1^i \dfrac{\partial U}{\partial E_i} + \dfrac{1}{2}\left(\sum_{i,j=0}^{L} a_{ij} \dfrac{\partial^2 U}{\partial x_i \partial x_j} + \sum_{i,j=1}^{L} b_{ij} \dfrac{\partial^2 U}{\partial E_i \partial E_j}\right) = \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i - \sum_{i=1}^{L} pq_i x_i E_i \\ U(z, T) = 0 \end{cases} \tag{3.16}$$

By the change of variables s = T − t, the system (3.16) can be rewritten as follows:

$$\begin{cases} -\dfrac{\partial V}{\partial s} + \sum_{i=0}^{L} f_i \dfrac{\partial V}{\partial x_i} + \sum_{i=1}^{L} f_1^i \dfrac{\partial V}{\partial E_i} + \dfrac{1}{2}\left(\sum_{i,j=0}^{L} a_{ij} \dfrac{\partial^2 V}{\partial x_i \partial x_j} + \sum_{i,j=1}^{L} b_{ij} \dfrac{\partial^2 V}{\partial E_i \partial E_j}\right) = \sum_{i=1}^{L}\sum_{j \neq i} c_j E_i - \sum_{i=1}^{L} pq_i x_i E_i \\ V(z, 0) = 0 \end{cases} \tag{3.17}$$

where V(z, s) = U(z, T − s).
Before giving the solution of problem (3.17), let us recall, for a non-integer positive number l, that $C^{l;l/2}(\bar\Omega)$ is the Banach space of functions u(x; t) that are continuous in $\bar\Omega$ together with all derivatives of the form $D_t^r D_x^s u$ for 2r + s < l, with finite norm

$$|u|_{\Omega}^{(l)} = \langle u \rangle_{\Omega}^{(l)} + \sum_{j=0}^{\lfloor l \rfloor} \langle u \rangle_{\Omega}^{(j)}$$

where

$$\langle u \rangle_{\Omega}^{(0)} = \max_{\Omega}|u|; \qquad \langle u \rangle_{\Omega}^{(j)} = \sum_{2r+s=j} \langle D_t^r D_x^s u \rangle_{\Omega}^{(0)};$$

$$\langle u \rangle_{\Omega}^{(l)} = \langle u \rangle_{x,\Omega}^{(l)} + \langle u \rangle_{t,\Omega}^{(l/2)};$$

$$\langle u \rangle_{x,\Omega}^{(l)} = \sum_{2r+s=\lfloor l \rfloor} \langle D_t^r D_x^s u \rangle_{x,\Omega}^{(l-\lfloor l \rfloor)};$$

$$\langle u \rangle_{t,\Omega}^{(l/2)} = \sum_{0<l-2r-s<2} \langle D_t^r D_x^s u \rangle_{t,\Omega}^{(\frac{l-2r-s}{2})}.$$
0<l−2r−s<2
Theorem 3.1 Let Ω = [a; +∞)^{2L+1} × [0; T] with a > 0 and l > 0. If all coefficients belong to the class $C^{l;l/2}(\bar\Omega)$, then problem (3.17) has a unique solution in the class $C^{2+l;1+l/2}(\bar\Omega)$. It satisfies the inequality

$$|V|_{\Omega}^{(l+2)} \le k \left|\sum_{i=1}^{L}\sum_{j \neq i} c_j E_i - \sum_{i=1}^{L} pq_i x_i E_i\right|_{\Omega}^{(l)} \tag{3.18}$$

where k is a constant not depending on the right-hand side.

Proof The coefficients are $C^{\infty}(\bar\Omega)$ and Hölder continuous with exponent 1. Thus, by using Theorem 5.1, page 320, in [2], we show that problem (3.17) has a unique solution in the class $C^{2+l;1+l/2}(\bar\Omega)$, satisfying the inequality (3.18).

4 Stochastic Optimization and Numerical Simulations in the Case of a Single Site

This section is devoted to stochastic optimization and numerical simulations in the case of a single site, whose modeling was done in Sect. 2.1.

4.1 Stochastic Optimization

Before solving the stochastic optimization problem, let us recall the stochastic differential equation system obtained in Sect. 2.1:
$$\begin{cases} dx(t) = (bx - dx - qxE)\,dt + \sqrt{bx + dx + qxE}\,dW_1(t) \\ dE(t) = (pqxE - cE)\,dt + \sqrt{pqxE + cE}\,dW_2(t) \\ x(0) = x_0 \\ E(0) = E_0 \end{cases} \tag{4.1}$$

The stochastic differential equation model for the dynamics of these interacting populations (4.1) can be rewritten in the form

$$dX(t) = b(t, X)\,dt + G^{1/2}(t, X)\,dW(t) \tag{4.2}$$

with X(0) = X_0,

$$G(t, X) = \begin{bmatrix} bx + dx + qxE & 0 \\ 0 & pqxE + cE \end{bmatrix}$$

and W(t) = [W_1(t); W_2(t)]^T a two-dimensional Brownian motion.
Now we show that the system (4.1) has a unique solution. For this, let Ω = [x_{min}; x_{max}] × [E_{min}; E_{max}] and A = [p_{min}; p_{max}], where x_{min}, x_{max}, E_{min}, E_{max}, p_{min} and p_{max} are positive real numbers such that x_{min} < x_{max}, E_{min} < E_{max} and p_{min} < p_{max}.

Theorem 4.1 Assume that X_0 is independent of the future of the Brownian motion beyond time t = 0 and that, for any t ∈ [0; +∞), p ∈ A and X ∈ Ω, X is a progressively measurable process such that, for any T > 0,

$$E\left[\int_t^T |X_s|^2\,ds\right] < +\infty.$$

Then the system (4.1) has a unique solution.

Proof Let $X = \begin{pmatrix} x \\ E \end{pmatrix}$ and $Y = \begin{pmatrix} x_1 \\ E_1 \end{pmatrix}$ in Ω and p ∈ A. To prove the theorem, we show that |b(t, X, p) − b(t, Y, p)| + |G(t, X, p) − G(t, Y, p)| ≤ K|X − Y| for t ∈ [0; +∞). We have

$$|b(t, X, p) - b(t, Y, p)| = \max\big(|(b - d)(x - x_1) - q(xE - x_1E_1)|;\; |pq(xE - x_1E_1) - c(E - E_1)|\big) \le K_1|X - Y|$$

where $K_1 = \max\big(|b - d| + q(E_{max} + x_{max});\; pq(E_{max} + x_{max}) + c\big)$, and

$$|G(t, X, p) - G(t, Y, p)| = \max\left(\left|\sqrt{bx + dx + qxE} - \sqrt{bx_1 + dx_1 + qx_1E_1}\right|;\; \left|\sqrt{pqxE + cE} - \sqrt{pqx_1E_1 + cE_1}\right|\right) \le K_2|X - Y|$$

where $K_2 = \max\left(\dfrac{b + d + q(x_{max} + E_{max})}{2\sqrt{(b + d + qE_{min})x_{min}}};\; \dfrac{pq(x_{max} + E_{max}) + c}{2\sqrt{(pqx_{min} + c)E_{min}}}\right)$. Choosing K = max(K_1; K_2) ends the proof.
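The Lipschitz bound used in the proof can be probed numerically. The sketch below is ours, not part of the paper: it borrows the drift of (4.1) with the parameter values of Fig. 7, takes the box [0, 20]² as a stand-in for Ω, and estimates the drift's Lipschitz constant in the sup norm from random pairs of points. By the mean value theorem the estimate should never exceed the analytic bound max(|b − d| + q(x_max + E_max), pq(x_max + E_max) + c) = 0.43 for these values.

```python
import numpy as np

def drift(X, b=0.05, d=0.02, q=0.01, p=1.0, c=0.02):
    """Drift b(t, X) of system (4.1): components for x and E."""
    x, E = X
    return np.array([(b - d)*x - q*x*E, p*q*x*E - c*E])

def empirical_lipschitz(lo=0.0, hi=20.0, n=5000, seed=1):
    """Largest ratio |b(X) - b(Y)|_inf / |X - Y|_inf over random pairs
    drawn from the box [lo, hi]^2 (our stand-in for Omega)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(n, 2))
    Q = rng.uniform(lo, hi, size=(n, 2))
    num = np.array([np.max(np.abs(drift(p_) - drift(q_)))
                    for p_, q_ in zip(P, Q)])
    den = np.max(np.abs(P - Q), axis=1)
    ok = den > 1e-12                     # avoid division by ~0
    return float(np.max(num[ok] / den[ok]))
```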
In Fig. 7, we plot the solution of system (4.1). The blue and red curves respectively represent the biomass and the fishing effort, in the deterministic and stochastic cases, with initial conditions x(0) = 10 and E(0) = 5.
We notice that when the fishing effort increases to a maximal value, the biomass decreases considerably, which leads to the extinction of the species. Then, as the species becomes rare, the fishing effort also decreases and goes to zero.

Fig. 7 Representation of the solution of system (4.1). We plot the solution for T = 100, b = 0.05,
d = 0.02, q = 0.01, p = 1, and c = 0.02

4.1.1 Position of the Problem

In this section, we want to maximize the profits, defined as the difference between the total catch and the costs related to the fishing activity. The functional governing these profits is defined as the average value of all benefits over a time interval [0; T], T > 0, and is given by

$$J[x_t, E(\cdot)] = E\left[\int_0^T (pqxE - cE)\,dt\right], \tag{4.3}$$

where the positive constants p and c are respectively the price and the cost per unit of fishing effort. Then the stochastic optimization problem is to maximize J[x_t, E(·)] under the constraints

$$\begin{cases} dx(t) = f(t, x)\,dt + \sqrt{g(t, x)}\,dW \\ x(0) = x_0 \end{cases} \tag{4.4}$$

where f(t, x) = bx(t) − dx(t) − qx(t)E(t), g(t, x) = bx(t) + dx(t) + qx(t)E(t) and W is a Brownian motion.
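For any fixed admissible policy, the functional (4.3) can be estimated by plain Monte Carlo. The sketch below is our illustration, not part of the paper: it evaluates J for a constant effort E, simulating x from (4.4) by Euler-Maruyama with clipping at zero (a numerical choice of ours) and the parameter values of Fig. 7.

```python
import numpy as np

def profit_of_constant_effort(E, x0=10.0, b=0.05, d=0.02, q=0.01,
                              p=1.0, c=0.02, T=100.0, n_steps=2000,
                              n_paths=200, seed=0):
    """Monte Carlo estimate of J in (4.3) for a constant effort E."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    profit = np.zeros(n_paths)
    for _ in range(n_steps):
        profit += (p*q*x*E - c*E) * dt          # integrand of (4.3)
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        # Euler-Maruyama step for (4.4), clipped at 0
        x = np.maximum(x + (b*x - d*x - q*x*E)*dt
                       + np.sqrt(b*x + d*x + q*x*E)*dW, 0.0)
    return profit.mean()
```

Scanning E over a grid gives a crude lower envelope of the value function at t = 0, against which the HJB solution below can be compared.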

4.1.2 Dynamic Programming Principle

To solve the problem defined by (4.3) and (4.4), let U(x, t), known as the value function, be the expected value of the objective function (4.3) from t to T:

$$U(x, t) = \max_{E(\cdot) \in [E_{min}; E_{max}]} E\left[\int_t^T (pqxE - cE)\,ds\right]. \tag{4.5}$$

This functional gives the remaining optimal cost assuming that we arrive at x_t at time t < T. The final condition imposed on the function U is:

$$U(x, T) = 0 \tag{4.6}$$

Then, by the principle of optimality,

$$U(x, t) = \max_{E(\cdot) \in [E_{min}; E_{max}]} E\left[\int_t^{t+dt} (pqxE - cE)\,ds + U(x + dx_t, t + dt)\right]. \tag{4.7}$$

By Taylor's expansion, we have:

$$U(x + dx_t, t + dt) = U(x, t) + \frac{\partial U}{\partial x}dx_t + \frac{\partial U}{\partial t}dt + \frac{1}{2}\left[U_{xx}(dx_t)^2 + U_{tt}(dt)^2\right] + o(dt)^2 \tag{4.8}$$

We have:

$$dx_t = f\,dt + g^{1/2}\,dW \quad \text{and} \quad (dx_t)^2 = f^2(dt)^2 + 2fg^{1/2}\,dW\,dt + g(dW)^2.$$

By using the Brownian motion properties E(dW_t) = 0 and E(dW_t)² = dt, we have:

$$E(dx_t) = f\,dt \tag{4.9}$$

and

$$E(dx_t)^2 = f^2(dt)^2 + g\,dt. \tag{4.10}$$

Computing the expectation and substituting (4.9) and (4.10) into (4.8), we have:

$$E(U(x + dx_t, t + dt)) = U(x, t) + (U_x f)dt + \frac{\partial U}{\partial t}dt + \frac{1}{2}U_{xx}g\,dt + \text{terms in } (dt)^2. \tag{4.11}$$

Substituting (4.11) into (4.5), we obtain:

$$U(x, t) = \max_{E(\cdot) \in [E_{min}; E_{max}]}\left[\int_t^{t+dt}(pqxE - cE)\,ds + U(x, t) + (U_x f)dt + \frac{\partial U}{\partial t}dt + \frac{1}{2}U_{xx}g\,dt + \text{terms in } (dt)^2\right]$$

Assuming that dt is sufficiently small, we have:

$$U(x, t) = \max_{E(\cdot) \in [E_{min}; E_{max}]}\left[(pqxE - cE)\,dt + U(x, t) + (U_x f)dt + \frac{\partial U}{\partial t}dt + \frac{1}{2}U_{xx}g\,dt + o(dt)^2\right] \tag{4.12}$$

Note that we have suppressed the arguments of the functions involved in (4.12). Cancelling the term U on both sides of (4.12), dividing the remainder by dt, and letting dt → 0, we obtain the Hamilton-Jacobi-Bellman (HJB) equation

$$\frac{\partial U}{\partial t} + \max_{E(\cdot) \in [E_{min}; E_{max}]}\left[pqxE - cE + U_x f + \frac{1}{2}U_{xx}g\right] = 0 \tag{4.13}$$

for the value function U(x, t), with the boundary condition

$$U(x, T) = 0. \tag{4.14}$$

Substituting f and g by their values in (4.13), we have:

$$\frac{\partial U}{\partial t} + \max_{E(\cdot) \in [E_{min}; E_{max}]}\left[pqxE - cE + (bx - dx - qxE)\frac{\partial U}{\partial x} + \frac{1}{2}(bx + dx + qxE)\frac{\partial^2 U}{\partial x^2}\right] = 0 \tag{4.15}$$

Noticing that the terms in square brackets are continuous in E, the maximum always exists in [E_{min}; E_{max}], where E_{min} and E_{max} are positive real values with E_{min} < E_{max}. We denote this maximum by E, so that our problem becomes

$$\begin{cases} \dfrac{\partial U}{\partial t} + \dfrac{1}{2}(bx + dx + qxE)\dfrac{\partial^2 U}{\partial x^2} + (bx - dx - qxE)\dfrac{\partial U}{\partial x} = cE - pqxE \\ U(x, T) = 0 \end{cases} \tag{4.16}$$

4.1.3 Existence and Uniqueness

By the change of variables s = T − t, the system (4.16) can be rewritten as follows:

$$\begin{cases} \dfrac{\partial V}{\partial s}(x; s) - \dfrac{1}{2}(bx + dx + qxE)\dfrac{\partial^2 V}{\partial x^2}(x; s) - (bx - dx - qxE)\dfrac{\partial V}{\partial x}(x; s) = pqxE - cE \\ V(x, 0) = 0 \end{cases} \tag{4.17}$$

where V(x; s) = U(x; T − s).


[Surface plots of V(x, s): panels (a) and (b)]
Fig. 8 Representation of solution of (4.16) for b = 0.01, d = 0.02, q = 0.1, p = 0.08, c = 0.5,
T = 10, a = 5, E = 0.2 (case (a)) and E = 0.4 (case (b))

Let the functions f, g and h be defined on [a; +∞) × [0; T] by f(x; t) = bx − dx − qxE, g(x; t) = bx + dx + qxE and h(x; t) = pqxE − cE.
System (4.17) has a unique solution because it is a particular case of system (3.17); the solution is guaranteed by Theorem 3.1 and represented in Figs. 8 and 9.

4.2 Numerical Simulations

In this section, we perform numerical simulations of the solution of (4.17) by using finite difference methods, assuming that b ≥ d + qE. For this, we must solve our partial differential equation on [0; a] × [0; T], where a > 0 and T > 0. We subdivide the intervals [0; a] and [0; T] into N + 1 and K + 1 equal subintervals respectively, with steps Δx = a/(N + 1) and Δs = T/(K + 1).
[Surface plots of V(x, s): panels (c) and (d)]
Fig. 9 Representation of solution of (4.16) for b = 0.01, d = 0.02, q = 0.1, p = 0.08, c = 0.5,
T = 10, a = 5, E = 0.7 (case (c)) and E = 1 (case (d))

So, for i ∈ {0, …, N + 1} and j ∈ {0, …, K + 1}, we have x_i = iΔx and s_j = jΔs. We write V(x_i; s_j) = V_i^j for all i ∈ {0, …, N + 1} and j ∈ {0, …, K + 1}.
With the initial condition at s = 0 of the problem, we have V(x_i, 0) = V_i^0 = 0 for i ∈ {0, …, N + 1}. So we have (N + 2)(K + 2) equations for (N + 2)(K + 2) unknowns.
By discretizing, we have for all j ∈ {0, …, K}:

for i = 0:

$$\frac{\partial^2 V}{\partial x^2}(x; s) \approx \frac{V_2^j - 2V_1^j + V_0^j}{(\Delta x)^2} \tag{4.18}$$

$$\frac{\partial V}{\partial x}(x; s) \approx \frac{V_1^j - V_0^j}{\Delta x} \tag{4.19}$$

$$\frac{\partial V}{\partial s}(x; s) \approx \frac{V_0^{j+1} - V_0^j}{\Delta s}. \tag{4.20}$$

for i ∈ {1, …, N}:

$$\frac{\partial^2 V}{\partial x^2}(x; s) \approx \frac{V_{i+1}^j - 2V_i^j + V_{i-1}^j}{(\Delta x)^2} \tag{4.21}$$

$$\frac{\partial V}{\partial x}(x; s) \approx \frac{V_{i+1}^j - V_i^j}{\Delta x} \tag{4.22}$$

$$\frac{\partial V}{\partial s}(x; s) \approx \frac{V_i^{j+1} - V_i^j}{\Delta s}. \tag{4.23}$$

for i = N + 1:

$$\frac{\partial^2 V}{\partial x^2}(x; s) \approx \frac{V_{N-1}^j - 2V_N^j + V_{N+1}^j}{(\Delta x)^2} \tag{4.24}$$

$$\frac{\partial V}{\partial x}(x; s) \approx \frac{V_N^j - V_{N+1}^j}{\Delta x} \tag{4.25}$$

$$\frac{\partial V}{\partial s}(x; s) \approx \frac{V_{N+1}^{j+1} - V_{N+1}^j}{\Delta s}. \tag{4.26}$$

Substituting equations (4.18) to (4.26) into (4.17), we have for all j ∈ {0, …, K}:

for i = 0:

$$\frac{V_0^{j+1} - V_0^j}{\Delta s} + A(x_0)\frac{V_2^j - 2V_1^j + V_0^j}{(\Delta x)^2} + B(x_0)\frac{V_1^j - V_0^j}{\Delta x} = C(x_0) \tag{4.27}$$

for i ∈ {1, …, N}:

$$\frac{V_i^{j+1} - V_i^j}{\Delta s} + A(x_i)\frac{V_{i+1}^j - 2V_i^j + V_{i-1}^j}{(\Delta x)^2} + B(x_i)\frac{V_{i+1}^j - V_i^j}{\Delta x} = C(x_i) \tag{4.28}$$

for i = N + 1:

$$\frac{V_{N+1}^{j+1} - V_{N+1}^j}{\Delta s} + A(x_{N+1})\frac{V_{N-1}^j - 2V_N^j + V_{N+1}^j}{(\Delta x)^2} + B(x_{N+1})\frac{V_N^j - V_{N+1}^j}{\Delta x} = C(x_{N+1}) \tag{4.29}$$

where for i ∈ {0, …, N + 1}:

$$A(x_i) = -\frac{1}{2}(bx_i + dx_i + qEx_i), \quad B(x_i) = -(bx_i - dx_i - qEx_i), \quad C(x_i) = pqEx_i - cE.$$

With simple calculations, Eqs. (4.27), (4.28) and (4.29) respectively become:

$$\frac{V_0^{j+1} - V_0^j}{\Delta s} + \alpha_0 V_2^j + \beta_0 V_1^j + \gamma_0 V_0^j = C(x_0) \tag{4.30}$$

$$\frac{V_i^{j+1} - V_i^j}{\Delta s} + \alpha_i V_{i+1}^j + \beta_i V_i^j + \gamma_i V_{i-1}^j = C(x_i) \tag{4.31}$$

$$\frac{V_{N+1}^{j+1} - V_{N+1}^j}{\Delta s} + \alpha_{N+1} V_{N-1}^j + \beta_{N+1} V_{N+1}^j + \gamma_{N+1} V_N^j = C(x_{N+1}) \tag{4.32}$$

where

$$\alpha_0 = \frac{A(x_0)}{(\Delta x)^2}; \quad \beta_0 = \frac{-2A(x_0)}{(\Delta x)^2} + \frac{B(x_0)}{\Delta x}; \quad \gamma_0 = \frac{A(x_0)}{(\Delta x)^2} - \frac{B(x_0)}{\Delta x};$$

$$\alpha_i = \frac{A(x_i)}{(\Delta x)^2} + \frac{B(x_i)}{\Delta x}; \quad \beta_i = -\left(\frac{2A(x_i)}{(\Delta x)^2} + \frac{B(x_i)}{\Delta x}\right); \quad \gamma_i = \frac{A(x_i)}{(\Delta x)^2};$$

and

$$\alpha_{N+1} = \frac{A(x_{N+1})}{(\Delta x)^2}; \quad \beta_{N+1} = \frac{A(x_{N+1})}{(\Delta x)^2} - \frac{B(x_{N+1})}{\Delta x}; \quad \gamma_{N+1} = \frac{-2A(x_{N+1})}{(\Delta x)^2} + \frac{B(x_{N+1})}{\Delta x}.$$

Introducing the vector notation, for all j ∈ {0, …, K + 1},

$$V^{(j)} = \begin{pmatrix} V_0^j \\ \vdots \\ V_{N+1}^j \end{pmatrix},$$

the scheme (4.30), (4.31) and (4.32) can be put in the following form:

$$\begin{cases} V^{j+1} = (I - \Delta s\,M) \times V^j + \Delta s\,C \\ j \in \{0, \dots, K + 1\} \end{cases} \tag{4.33}$$

where I is the identity matrix,


$$M = \begin{bmatrix} \gamma_0 & \beta_0 & \alpha_0 & 0 & \cdots & & 0 \\ \gamma_1 & \beta_1 & \alpha_1 & 0 & \cdots & & 0 \\ 0 & \gamma_2 & \beta_2 & \alpha_2 & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \gamma_{N-1} & \beta_{N-1} & \alpha_{N-1} & 0 \\ 0 & \cdots & & 0 & \gamma_N & \beta_N & \alpha_N \\ 0 & \cdots & & 0 & \alpha_{N+1} & \gamma_{N+1} & \beta_{N+1} \end{bmatrix}, \qquad C = \begin{bmatrix} C(x_0) \\ C(x_1) \\ \vdots \\ C(x_N) \\ C(x_{N+1}) \end{bmatrix}.$$

Proposition 4.2 If the norm of I − Δs M is less than 1, then the scheme (4.33) is convergent for the norm ‖·‖_∞, where

$$\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$$

for a matrix A = (a_{ij})_{1 \le i,j \le n}.


Proof Assume that the norm of I − Δs M is less than 1. The scheme (4.33) gives, for i = 0,

$$V_0^{j+1} = (1 - \Delta s\,\gamma_0)V_0^j - \Delta s\,\beta_0 V_1^j - \Delta s\,\alpha_0 V_2^j + \Delta s\,C(x_0).$$

Passing to norms, we have:

$$|V_0^{j+1}| \le |1 - \Delta s\,\gamma_0||V_0^j| + |\Delta s\,\beta_0||V_1^j| + |\Delta s\,\alpha_0||V_2^j| + \Delta s|C(x_0)|.$$

With our assumption, we obtain:

$$|V_0^{j+1}| \le (|1 - \Delta s\,\gamma_0| + |\Delta s\,\beta_0| + |\Delta s\,\alpha_0|)\|V^j\|_\infty + \Delta s\|C\|_\infty \le \|V^j\|_\infty + \Delta s\|C\|_\infty \tag{4.34}$$

With the same arguments, we show that for i ∈ {1, …, N}

$$|V_i^{j+1}| \le \|V^j\|_\infty + \Delta s\|C\|_\infty \tag{4.35}$$

and for i = N + 1:

$$|V_{N+1}^{j+1}| \le \|V^j\|_\infty + \Delta s\|C\|_\infty. \tag{4.36}$$

Equations (4.34), (4.35) and (4.36) show that for j ∈ {0, …, K}

$$\|V^{j+1}\|_\infty \le \|V^j\|_\infty + \Delta s\|C\|_\infty. \tag{4.37}$$

An obvious recurrence gives us:

$$\|V^j\|_\infty \le \|V^0\|_\infty + j\Delta s\|C\|_\infty \le \|V^0\|_\infty + T\|C\|_\infty \tag{4.38}$$

for all j ∈ {0, …, K + 1}, which shows that the scheme is stable and ends the proof.
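A minimal implementation of the scheme (4.33) might look as follows. This is our sketch, not the authors' code; the parameter values are taken from the captions of Figs. 8 and 9, and the ∞-norm of I − Δs M from Proposition 4.2 is returned so the stability hypothesis can be inspected rather than assumed.

```python
import numpy as np

def solve_hjb_fd(a=5.0, T=10.0, N=50, K=2000, b=0.01, d=0.02,
                 q=0.1, p=0.08, c=0.5, E=0.2):
    """Explicit scheme (4.33) for the transformed problem (4.17).

    Returns the grid x, the last time slice V(., T) and ||I - ds*M||_inf.
    """
    dx = a / (N + 1)
    ds = T / (K + 1)
    x = np.linspace(0.0, a, N + 2)
    A = -0.5*(b + d + q*E)*x          # A(x) = -(bx + dx + qEx)/2
    B = -(b - d - q*E)*x              # B(x) = -(bx - dx - qEx)
    C = p*q*E*x - c*E                 # C(x) = pqEx - cE
    M = np.zeros((N + 2, N + 2))
    # row 0, Eq. (4.30): gamma0 V0 + beta0 V1 + alpha0 V2
    M[0, 0] = A[0]/dx**2 - B[0]/dx
    M[0, 1] = -2*A[0]/dx**2 + B[0]/dx
    M[0, 2] = A[0]/dx**2
    # interior rows, Eq. (4.31)
    for i in range(1, N + 1):
        M[i, i-1] = A[i]/dx**2
        M[i, i] = -(2*A[i]/dx**2 + B[i]/dx)
        M[i, i+1] = A[i]/dx**2 + B[i]/dx
    # last row, Eq. (4.32): alpha, gamma, beta on columns N-1, N, N+1
    M[N+1, N-1] = A[N+1]/dx**2
    M[N+1, N] = -2*A[N+1]/dx**2 + B[N+1]/dx
    M[N+1, N+1] = A[N+1]/dx**2 - B[N+1]/dx
    step = np.eye(N + 2) - ds*M
    V = np.zeros(N + 2)               # initial condition V(x, 0) = 0
    for _ in range(K + 1):
        V = step @ V + ds*C
    return x, V, np.linalg.norm(step, np.inf)
```

Note that at x = 0 the coefficients A, B vanish, so V at the left boundary simply accumulates Δs·C(0) = −Δs·cE per step; this gives a cheap analytic check of the implementation.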

References

1. E. Allen, Modeling with Itô Stochastic Differential Equations (Springer, Dordrecht, 2007)
2. O.A. Ladyzhenskaya, V.A. Solonnikov, N.N. Ural'tseva, Linear and Quasilinear Equations of Parabolic Type (AMS, Providence, 1968)
