biorthogonal_system
biorthogonal_system
com
ScienceDirect
Journal of Differential Equations 336 (2022) 654–707
www.elsevier.com/locate/jde
Abstract
In this paper, we are interested in the minimal null control time of one-dimensional first-order linear
hyperbolic systems by one-sided boundary controls. Our main result is an explicit characterization of the
smallest and largest values that this minimal null control time can take with respect to the internal coupling
matrix. In particular, we obtain a complete description of the situations where the minimal null control
time is invariant with respect to all the possible choices of internal coupling matrices. The proof relies on
the notion of equivalent systems, in particular the backstepping method, a canonical LU -decomposition
for boundary coupling matrices and a compactness-uniqueness method adapted to the null controllability
property.
© 2022 Elsevier Inc. All rights reserved.
Keywords: Hyperbolic systems; Minimal null control time; Equivalent systems; Backstepping method;
LU -decomposition; Compactness-uniqueness method
* Corresponding author.
E-mail addresses: [email protected] (L. Hu), [email protected], [email protected] (G. Olive).
https://doi.org/10.1016/j.jde.2022.07.023
0022-0396/© 2022 Elsevier Inc. All rights reserved.
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
In this article we are interested in the null controllability properties of the following class of
one-dimensional first-order linear hyperbolic systems, which appears for instance in linearized
Saint-Venant equations and many other physical models of balance laws (see e.g. [2, Chapter 1]
and many references therein):
⎧
⎪ ∂y ∂y
⎪
⎪ (t, x) + (x) (t, x) = M(x)y(t, x),
⎨ ∂t ∂x
y− (t, 1) = u(t), y+ (t, 0) = Qy− (t, 0), (1)
⎪
⎪
⎪
⎩
y(0, x) = y 0 (x).
In (1), t > 0, x ∈ (0, 1), y(t, ·) is the state at time t, y 0 is the initial data and u(t) is the
control at time t. We denote by n ≥ 2 the total number of equations of the system. The matrix
∈ C 0,1 ([0, 1])n×n is assumed to be diagonal:
= diag(λ1 , . . . , λn ), (2)
λ1 (x) < · · · < λm (x) < 0 < λm+1 (x) < · · · < λm+p (x), ∀x ∈ [0, 1]. (3)
Finally, the matrix M ∈ L∞ (0, 1)n×n couples the equations of the system inside the domain and
the constant matrix Q ∈ Rp×m couples the equations of the system on the boundary x = 0.
All along this paper, for a vector (or vector-valued function) v ∈ Rn and a matrix (or matrix-
valued function) A ∈ Rn×n , we use the notation
v A−− A−+
v= − , A= ,
v+ A+− A++
where v− ∈ Rm , v+ ∈ Rp and A−− ∈ Rm×m , A−+ ∈ Rm×p , A+− ∈ Rp×m , A++ ∈ Rp×p .
We recall that the system (1) is well posed in (0, T ) for every T > 0: for every y 0 ∈ L2 (0, 1)n
and u ∈ L2 (0, T )m , there exists a unique solution
to the system (1). By solution we mean “solution along the characteristics”, this will be detailed
in Section 2 below.
The regularity C 0 ([0, T ]; L2 (0, 1)n ) of the solution allows us to consider control problems in
the space L2 (0, 1)n :
Definition 1.1. Let T > 0. We say that the system (1) is:
• exactly controllable in time T if, for every y 0 , y 1 ∈ L2 (0, 1)n , there exists u ∈ L2 (0, T )m
such that the corresponding solution y to the system (1) in (0, T ) satisfies y(T , ·) = y 1 .
655
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Clearly, exact controllability implies null controllability, but the converse is not true in general.
These notions also depend on the time T and, since controllability in time T1 implies con-
trollability in time T2 for every T2 ≥ T1 , it is natural to try to find the smallest possible control
time, the so-called “minimal control time”. This problem was recently completely solved in [17]
for the notion of exact controllability and we will investigate here what happens for the null
controllability.
Definition 1.2. For any , M and Q as above, we denote by Tinf (, M, Q) ∈ [0, +∞] the min-
imal null control time of the system (1), that is
Tinf (, M, Q) = inf {T > 0 | the system (1) is null controllable in time T } .
The time Tinf (, M, Q) is named “minimal” null control time according to the current litera-
ture, despite it is not always a minimal element of the set. We keep this naming here, but we use
the notation with the “inf” to avoid eventual confusions. The time Tinf (, M, Q) ∈ [0, +∞] is
thus the unique time that satisfies the following two properties:
• If T > Tinf (, M, Q), then the system (1) is null controllable in time T .
• If T < Tinf (, M, Q), then the system (1) is not null controllable in time T .
For the rest of this article it is important to keep in mind that the assumption (3) implies the
following order relation among Ti ():
T1 () ≤ · · · ≤ Tm (),
(5)
Tm+p () ≤ · · · ≤ Tm+1 ().
An important feature of the present article is that no assumptions will be required on the
boundary coupling matrices Q. To be able to handle such a general case and state our main result
we introduce a notion of canonical form.
Definition 1.3. We say that a matrix Q0 ∈ Rp×m is in canonical form if either Q0 = 0 or there
exist an integer ρ ≥ 1, row indices r1 , . . . , rρ ∈ {1, . . . , p} with r1 < · · · < rρ and distinct column
indices c1 , . . . , cρ ∈ {1, . . . , m} such that
656
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
for Q01 : (r1 , c1 ) = (1, 2), (r2 , c2 ) = (2, 3), (r3 , c3 ) = (4, 1),
for Q02 : (r1 , c1 ) = (1, 1), (r2 , c2 ) = (2, 2), (r3 , c3 ) = (4, 3).
Using the Gaussian elimination we can transform any matrix into a canonical form, this is
what we will call in this article the “LCU decomposition” (for Lower–Canonical–Upper decom-
position). More precisely, we have
Proposition 1.5. For every Q ∈ Rp×m , there exists a unique Q0 ∈ Rp×m such that the following
two properties hold:
(i) There exists an upper triangular matrix U ∈ Rm×m with only ones on its diagonal and there
exists an invertible lower triangular matrix L ∈ Rp×p such that
LQU = Q0 .
We mention that, because of possible zero rows or columns of Q0 , the matrices L and U are
in general not unique.
With this proposition, we can extend the definition of the indices (r1 , c1 ), . . . , (rρ , cρ ) to any
nonzero matrix:
Definition 1.6. For any nonzero matrix Q ∈ Rp×m , we denote by (r1 , c1 ), . . . , (rρ , cρ ) the posi-
tions of the nonzero entries of its canonical form (r1 < · · · < rρ ).
657
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Example 1.7. We illustrate how to find the decomposition of Proposition 1.5 in practice. Con-
sider
⎛ ⎞ ⎛ ⎞
0 1 2 1 1 −1 2
⎜0 2 5⎟ ⎜ 3 5 −1 8⎟
Q1 = ⎜
⎝0
⎟, Q2 = ⎜ ⎟.
1 2⎠ ⎝ 0 1 1 1⎠
4 −4 4 −1 3 6 4
Let us deal with Q1 first. We look at the first row, we take the first nonzero entry as pivot. We
remove the entries to the right on the same row by doing the column substitution C3 ← C3 − 2C2 ,
which gives
⎛ ⎞
⎛ ⎞0 1 0
1 0 0 ⎜0 2 1⎟
Q1 U1 = Q1 ⎝ 0 1 −2 ⎠ = ⎜
⎝0 1
⎟. (6)
0⎠
0 0 1
4 −4 12
We now look at the next row and take as new pivot the first nonzero entry that is not in the column
of the previous pivot, that is, not in C2 . Since there is no entry to the right of this new pivot, there
is nothing to do and we move to the next row. Since this next row has no nonzero element which
is not in C2 , C3 , we move again to the next and last row. We take as new pivot the first nonzero
entry that is not in C2 or C3 and we remove the entries to the right on the same row by doing the
column substitutions C2 ← C2 + C1 and C3 ← C3 − 3C1 , which gives
⎛ ⎞
⎛ ⎞ 0 1 0
1 1 −3 ⎜0 2 1⎟
Q1 U1 U2 = Q1 U1 ⎝ 0 1 0 ⎠ = ⎜
⎝0
⎟.
1 0⎠
0 0 1
4 0 0
Working then on the rows with downward substitutions only (starting with the first row) and
finally normalizing to 1 the remaining nonzero entries, we see that Q1 becomes Q01 of Exam-
ple 1.4. Similarly, it can be checked that the canonical form of Q2 is in fact Q02 of Example 1.4.
Remark 1.8. Observe that we only need to compute the matrix U in order to find the indices
(r1 , c1 ), . . . , (rρ , cρ ).
The uniqueness of the LCU decomposition is less straightforward and we refer for instance
to the arguments in the proof of [12, Theorem 1] or to [17, Appendix A] for a proof.
Remark 1.9. In the Gaussian elimination process described above, we absolutely do not want to
perform any permutation of the rows. This is because we have ordered the speeds of our system
in a particular way (recall (3)). The fact that we use right multiplication by upper triangular
matrices and left multiplication by lower triangular matrices is also dictated by this choice of
order (for instance in [17] the speeds were ordered differently and right multiplication by lower
triangular matrices was considered instead).
658
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
1.3. Literature
An important feature of this result is that it is valid whatever are the internal and boundary
coupling matrices M and Q. In other words, the time T[Rus] () gives an upper bound for the
minimal null control time Tinf (, M, Q) with respect to these matrices. It is also easy to see that
this upper bound is reached (simply take M = 0 and Q the matrix whose entries are all equal
to zero except for q1,m = 1). However, for most of the matrices M and Q, this upper bound is
too large. Indeed, by just slightly restricting the class of such matrices (in particular, for Q), it is
possible to have a strictly better upper bound than T[Rus]().
This fact was already observed in [25], where the author tried to find the minimal null control
time in the particular case of conservation laws (M = 0), rightly by looking more closely at the
properties of the boundary coupling matrix. He could not solve this problem though and he left
it as an open problem ([25, Remark p. 656]). This was eventually solved few years later in [26],
where the author gave an explicit expression of the minimal null control time in terms of some
indices related to Q.
Concerning systems of balance laws (M = 0), finding the minimal null control time for arbi-
trary M and Q is still an open challenging problem. Recently, there has been a resurgence on the
characterization of such a time. A first result in this direction has been obtained in [5] with an
improvement of the upper bound T[Rus] () for a certain class of boundary coupling matrices Q.
More precisely, they considered the class B defined by
B = Q ∈ Rp×m (8) is satisfied for every i ∈ {1, . . . , min {p, m − 1}} , (7)
the i × i matrix formed from the first i rows and the first i columns of Q is invertible, (8)
(it is understood that the set B is empty when m = 1). For this class of boundary coupling
matrices, the authors then showed that the upper bound T[Rus] () can be reduced to the time
T[CN] () defined by
⎧
⎪
⎪
⎨max max Tm+k () + Tk (), Tm () if m ≥ p,
k∈{1,...,p}
T[CN] () = (9)
⎪
⎪
⎩ max Tm+k () + Tk () if m < p.
k∈{1,...,m}
This was first done for some generic internal coupling matrices or under rather stringent condi-
tions ([5, Theorem 1.1 and 1.5]) but the same authors were then able to remove these restrictions
in [8, Theorem 1 and 3].
659
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
On the other hand, when the boundary coupling matrix Q is full row rank, the problem of
finding the minimal null control time, and not only an upper bound, has also been recently com-
pletely solved in [17]. More precisely, it is proved in [17, Theorem 1.12 and Remark 1.3] that
rank Q = p =⇒ Tinf (, M, Q) = max max Tm+k () + Tck (), Tm () ,
k∈{1,...,p}
where we recall that the indices ck are defined in Definition 1.6. We see in this case that the
minimal null control time has the remarkable property to be independent of the internal coupling
matrix M. In particular, this is the same time as the one found for conservation laws in [26],
yet with a more explicit expression. For m > p, this generalizes the aforementioned results of
[5,8] in two ways: firstly, this is a result for arbitrary full row rank boundary coupling matrices
(not only for Q ∈ B) and, secondly, this obviously establishes that no better time can be obtained
(even for Q ∈ B this is not proved in [5,8]). We mention this because the results of the present
paper will share these two features.
For the special case of 2 × 2 systems, the minimal null control time has also been found in
[11] when the boundary coupling matrix (which is then a scalar) is not zero and in [18] when the
boundary coupling is reduced to zero. Notably, in the second situation, the minimal null control
time depends on the behavior of the internal coupling matrix M ([18, Theorem 1.5]).
Finally, we would like to mention the related works [4,9,1] concerning time-dependent sys-
tems and [22,23,20,6,7] for quasilinear systems.
As we have discussed, finding what exactly is the minimal null control time turns out to be a
difficult task. Instead, in this article we propose to look for the smallest and largest values that the
minimal null control time Tinf (, M, Q) can take with respect to the internal coupling matrix M.
Our main result is an explicit and easy-to-compute formula for both of these times. We will also
completely characterize all the parameters and Q for which Tinf (, M, Q) is invariant with
respect to all M ∈ L∞ (0, 1)n×n . We will show that our results generalize all the known works
that have been previously quoted. In the course of the proof we will obtain some new results even
for conservation laws (M = 0), notably with an explicit feedback law stabilizing the system in
the minimal time.
Our proof relies on the notion of equivalent systems, in particular the backstepping method
with the results of [16,21], the introduction of a canonical LU -decomposition for boundary cou-
pling matrix Q in the same spirit as in [17], as well as a compactness-uniqueness method adapted
to the null controllability inspired from the works [8,13].
As we have seen in the previous section, to explicitly characterize Tinf (, M, Q) for arbitrary
M and Q is still a challenging open problem. Instead, we propose to find the smallest and largest
values that it can take with respect to the internal coupling matrix M.
Tinf (, Q) = inf Tinf (, M, Q) M ∈ L∞ (0, 1)n×n ,
Tsup (, Q) = sup Tinf (, M, Q) M ∈ L∞ (0, 1)n×n .
660
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
The main result of the present paper is the following explicit characterization of these two
quantities:
Theorem 1.11. Let ∈ C 0,1 ([0, 1])n×n satisfy (2)-(3) and let Q ∈ Rp×m be fixed.
(i) We have
Tinf (, Q) = max max Tm+rk () + Tck (), Tm+1 (), Tm () , (10)
k∈{1,...,ρ}
where we recall that the indices (rk , ck ) are defined in Definition 1.6.
(ii) We have
Tsup (, Q) = max max Tm+k () + Tck (), Tm+ρ0 +1 () + Tm () , (11)
k∈{1,...,ρ0 }
In the statement of Theorem 1.11, we use the convention that the undefined quantities are
simply not taken into account, which more precisely gives:
Corollary 1.12. Let ∈ C 0,1 ([0, 1])n×n satisfy (2)-(3) and let Q ∈ Rp×m be fixed. The map
M −→ Tinf (, M, Q) is constant on L∞ (0, 1)n×n if, and only if, and Q satisfy
ρ0 = p or 0 < ρ0 < p and max Tm+k () + Tck () ≥ Tm+ρ0 +1 () + Tm () .
k∈{1,...,ρ0 }
(13)
Remark 1.13. In the proof of Theorem 1.11, we will show in fact that the infimum in Tinf (, Q)
and the supremum in Tsup (, Q) are reached for some special matrices M. More precisely, we
will show that:
661
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
• The infimum in Tinf (, Q) is reached for M = 0. In particular, this shows that Tinf (, M, Q) ≥
Tinf (, 0, Q) for any M.
• The supremum in Tsup (, Q) is reached for M = 0 if the condition (13) holds.
• If the condition (13) fails, then the supremum in Tsup (, Q) is reached for the matrix M
whose entries are all equal to zero, except for
where L−1 = (ij )1≤i,j ≤p and L is any matrix L coming from the LCU decomposition of
Q.
for every subset E that contains the matrix M = 0 and the matrix M described in the third item
of this remark.
For instance, if the speeds are constant, then Tinf (, Q) and Tsup (, Q) are also the lower and
upper bounds of the minimal null control times among all the constant matrices M.
Example 1.15. Consider Q1 ∈ R4×3 and Q2 ∈ R4×4 of Example 1.7. For these matrices, the
previous results of the literature [5,8,17] cannot be applied: Q1 , Q2 ∈ B and rank Q1 , Q2 = 3 <
p = 4. We have (recall (5))
Tinf (, Q1 ) = max {T4 () + T2 (), T5 () + T3 (), T7 () + T1 (), T4 (), T3 ()}
= max {T4 () + T2 (), T5 () + T3 ()} ,
Tsup (, Q1 ) = max {T4 () + T2 (), T5 () + T3 (), T6 () + T3 ()}
= max {T4 () + T2 (), T5 () + T3 ()}
= Tinf (, Q1 ),
and
Tinf (, Q2 ) = max {T5 () + T1 (), T6 () + T2 (), T8 () + T3 (), T5 (), T4 ()}
= max {T5 () + T1 (), T6 () + T2 (), T8 () + T3 (), T4 ()} ,
662
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Tsup (, Q2 ) = max {T5 () + T1 (), T6 () + T2 (), T7 () + T4 ()} .
Remark 1.16. If during the computations of the indices (rk , ck ) we arrive at the last column, that
is if we have
ck0 = m, (14)
for some k0 ∈ {1, . . . , p}, then there is no need to find the next indices to be able to compute
Tinf (, Q) and Tsup (, Q) since we know that the corresponding times will not be taken into
account (because of (5)). For instance, for the matrix Q1 of Example 1.7 we can stop after the
very first step (6) since it gives c2 = 3, there is no need to go on and compute U2 .
Remark 1.17. Theorem 1.11 and its corollary generalize all the results of the literature that we
are aware of on the null controllability of systems of the form (1) (except for the special case
n = 2, which has been completely solved in [11,18]):
rank Q = p, (15)
exact and null controllability are equivalent properties for the system (1) (see e.g. [17, Re-
mark 1.3]) and it has been shown in [17, Theorem 4.1] that Tinf (, M, Q) is independent of
M in that situation. Under the rank condition (15), it is clear that ρ0 = p and the condition
(13) is thus satisfied. It then follows from Corollary 1.12 that Tinf (, M, Q) is independent
of M. Therefore, our result encompasses the one of [17].
• When m ≤ p and Q ∈ B (defined in (7)), it has been established in [8, Theorem 1] that
where we recall that T[CN] () is given by (9). In that case, we see that rk = ck = k for every
k ∈ {1, . . . , m − 1} and either ρ0 = m − 1 or ρ0 = m. In all cases, we can check that
max max Tm+k () + Tck (), Tm+ρ0 +1 () + Tm () = T[CN] ().
k∈{1,...,ρ0 }
Therefore, item (ii) of Theorem 1.11 generalizes [8, Theorem 1], which corresponded only to
the inequality “≤” and only valid for matrices Q ∈ B, but excluded for instance the matrices
presented in Example 1.7.
• In fact, when ρ0 = m in the previous point, the minimal null control time does not depend
on M. More generally, if the condition (14) holds for some k0 ≤ ρ0 , then the condition (13)
is satisfied (because of (5)) and it follows from Corollary 1.12 that
For instance, this condition is satisfied when the matrix Q has the block decomposition
663
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Q
Q= , rank Q = m,
Q
The proof of our main result will first consist in transforming our initial system (1) into “equiv-
alent” systems (from a controllability point of view) which have a simpler coupling structure. Let
us make this notion of equivalent systems precise here. We will introduce it for a slightly broader
class of systems than (1) because of the nature of the transformations that we will use in the
sequel, this will be clear from Section 3. All the systems of this paper will have the following
form:
⎧
⎪ ∂y ∂y
⎪
⎪ (t, x) + (x) (t, x) = M(x)y(t, x) + G(x)y− (t, 0),
⎨ ∂t ∂x
y− (t, 1) = u(t), y+ (t, 0) = Qy− (t, 0), (16)
⎪
⎪
⎪
⎩
y(0, x) = y 0 (x),
where M ∈ L∞ (0, 1)n×n and Q ∈ Rp×m as before, and G ∈ L∞ (0, 1)n×m . Therefore, (16) is
similar to (1) but it has the extra term with G. This system is well posed and the notions of
controllability are similarly defined (see Section 2 below).
In what follows, we will refer to a system of the general form (16) as
(, M, Q, G).
When a system does not contain a parameter (M or G) we will use the notation − rather than
writing 0, for instance we will use (, M, Q, −) when the system does not contain G. The
minimal null control time of the system (, M, Q, G) will be denoted by Tinf (, M, Q, G) (for
consistency, we will keep using the notation Tinf (, M, Q) rather than Tinf (, M, Q, −)).
Let us now give the precise definition of what we mean by equivalent systems in this work:
Definition 1.18. We say that two systems (, M1 , Q1 , G1 ) and (, M2 , Q2 , G2 ) are equivalent,
and we write
(, M1 , Q1 , G1 ) ∼ (, M2 , Q2 , G2 ),
such that, for every T > 0, the induced map L̃ : C 0 ([0, T ]; L2 (0, 1)n ) −→ C 0 ([0, T ]; L2 (0, 1)n )
defined by (L̃y)(t) = L(y(t)) for every t ∈ [0, T ] satisfies
L̃(S1 ) = S2 ,
664
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
where Si (i = 1, 2) denotes the space of all the solutions y to the system (, Mi , Qi , Gi ) in
(0, T ).
It is not difficult to check that ∼ is an equivalence relation and that two equivalent systems
share the same controllability properties:
Proposition 1.19. Let (, M1 , Q1 , G1 ) ∼ (, M2 , Q2 , G2 ) be two equivalent systems. Then, for
every T > 0, the system (, M1 , Q1 , G1 ) is null controllable in time T if, and only if, the system
(, M2 , Q2 , G2 ) is null controllable in time T .
In particular, two equivalent systems have the same minimal null control time. However, the
converse is not true in general, an example has been detailed in Appendix A.
Remark 1.20. Let us emphasize that the notion of equivalent systems that we introduced here
does not care how the control from one system is obtained from the control of the other system. It
is different from the notion of (feedback) equivalence introduced in the seminal work [3] in finite
dimension, which was designed to transfer the stabilization properties of one system to another
and thus required a more specific link between the two systems.
Since the proof of our main result involves many transformations, let us give a quick overview
of the main steps before going into detail:
(, M, Q, −) ∼ (, −, Q, G0 ),
for some G0 . It is nothing but a fundamental result of [21,16] that we rephrase here with the
notion of equivalent systems. Consequently, we only have to focus on systems of the form
(, −, Q, G) in the sequel, which have the advantage of having a simpler coupling structure.
2) Reduction of Q. In Section 4, we show that the boundary coupling matrix Q can always be
assumed in canonical form (Definition 1.3):
(, −, Q, G0 ) ∼ (, −, Q0 , G1 ),
for some G1 . This is an important step that greatly simplifies the coupling structure of the
system.
3) Characterization of Tinf (, Q). The previous step allows us to characterize in Section 5 the
smallest value of the minimal null control time. More precisely, we first establish that
inf Tinf (, −, Q0 , G1 ) G1 ∈ L∞ (0, 1)n×m
is equal to the quantity on the right-hand side of the equality (10). This is done by using a
similar argument to the one in [18]. We then show how to deduce the corresponding result
for the initial system (, M, Q, −), thus proving the first part of our main result.
665
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
4) Reduction of G+− . In view of the proof of the second part of our main result, we first show
in Section 6 how to use the canonical form of Q0 to prove that
G1−−
(, −, Q , G ) ∼ (, −, Q , G ),
0 1 0 2
G =
2
,
G2+−
2
gm+i,c k
= 0, ∀k ∈ {1, . . . , ρ} , ∀i ≥ rk . (17)
5) Removal of G−− . In Section 7 we then prove that the coupling term G−− has no influence
on the minimal null control time:
0
Tinf (, −, Q , G ) = Tinf (, −, Q , G ), G =
0 2 0 3 3
.
G2+−
Unlike all the other steps, the proof is not based on the construction of a suitable trans-
formation, it is based on a general compactness-uniqueness method adapted to the null
controllability property and inspired from the previous works [8,13].
6) Characterization of Tsup (, Q). Finally, in Section 8, we characterize the largest value of
the minimal null control time. More precisely, we first show that
0
sup Tinf (, −, Q0 , G3 ) G3 = and G2+− satisfies (17)
G2+−
is equal to the quantity on the right-hand side of the equality (11). We then show how to
deduce the corresponding result for the initial system (, M, Q, −), thus proving the second
part of our main result.
Remark 1.21. All the steps described above are constructive, except for the one invoking a
compactness-uniqueness argument. It would be interesting to be able to replace this step by a
constructive approach (if possible).
Before proceeding to the proof of our main result, we introduce in this section some notations
and recall some results concerning the well-posedness of the non standard systems of the form
(16).
We start with the characteristic curves associated with the system (16).
• First of all, throughout this paper it is convenient to extend λ1, . . . , λn to functions of R (still
denoted by the same) such that λ1 , . . . , λn ∈ C 0,1 (R) and
λ1 (x) < · · · < λm (x) ≤ −ε < 0 < ε ≤ λm+1 (x) < · · · < λm+p (x), ∀x ∈ R, (18)
666
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
for some ε > 0 small enough. Since all the results of the present paper depend only on the
values of λ1 , . . . , λn in [0, 1], they do not depend on such an extension.
• Let χi be the flow associated with λi , i.e. for every (t, x) ∈ R × R, the function s −→
χi (s; t, x) is the solution to the ordinary differential equation (ODE)
⎧
⎨ ∂χi (s; t, x) = λi (χi (s; t, x)), ∀s ∈ R,
∂s (19)
⎩
χi (t; t, x) = x.
The existence and uniqueness of a (global) solution to the ODE (19) follows from the (global)
Cauchy-Lipschitz theorem (see e.g. [14, Theorem II.1.1]). The uniqueness also yields the
important group property
• Let us now introduce the entry and exit times siin (t, x), siout (t, x) ∈ R of the flow χi (·; t, x)
inside the domain [0, 1], i.e. the respective unique solutions to
⎧
⎪
⎪ ∂(χi−1 ) 1 1
⎪
⎨ (θ ; t, x) = = , ∀θ ∈ R,
∂θ ∂χi
∂s χi−1 (θ ; t, x); t, x λi (θ )
⎪
⎪
⎪
⎩χ −1 (x; t, x) = t,
i
which gives
θ
1
χi−1 (θ ; t, x) = t + dξ.
λi (ξ )
x
It follows that
667
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
⎧
⎪
⎪ 1 x
⎪
⎪ 1 1
⎪
⎪si (t, x) = t −
in
dξ, siout (t, x) = t + dξ, if i ∈ {1, . . . , m} ,
⎪
⎪ −λi (ξ ) −λi (ξ )
⎨ x 0
⎪
⎪ x 1
⎪
⎪ 1 1
⎪
⎪ if i ∈ {m + 1, . . . , n} .
⎪si (t, x) = t − siout (t, x) = t +
in
dξ, dξ,
⎪
⎩ λi (ξ ) λi (ξ )
0 x
(21)
• We have the following monotonic properties:
⎧
⎪
⎪ ∂siin ∂siin ∂siout ∂siout
⎪
⎨ > 0, > 0, > 0, > 0, if i ∈ {1, . . . , m} ,
∂t ∂x ∂t ∂x
(22)
⎪
⎪ ∂s in ∂siin ∂siout ∂siout
⎪
⎩ i > 0, < 0, > 0, < 0, if i ∈ {m + 1, . . . , n} ,
∂t ∂x ∂t ∂x
and the following inverse formula, valid for every s, t ∈ R:
• Finally, we introduce the non negative and increasing function φi ∈ C 1,1 (R) defined by
⎧ x
⎪
⎪
⎪ 1
⎪
⎪ dξ if i ∈ {1, . . . , m} ,
⎪
⎨ 0 −λi (ξ )
⎪
φi (x) = (25)
⎪
⎪x
⎪
⎪ 1
⎪
⎪ dξ if i ∈ {m + 1, . . . , n} .
⎪
⎩ λi (ξ )
0
Let us now introduce the notion of solution for systems of the form (16). To this end, we have
to restrict our discussion to the domain where the system evolves, i.e. on (0, T ) × (0, 1), T > 0
being fixed. For every (t, x) ∈ (0, T ) × (0, 1), we have
(s, χi (s; t, x)) ∈ (0, t) × (0, 1), ∀s ∈ (s̄iin (t, x), t),
where we introduced
668
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
s̄iin (t, x) = max 0, siin (t, x) < t.
We now proceed to formal computations in order to introduce the notion of solution for
non smooth functions y. Writing the i-th equation of the system (16) along the characteristic
χi (s; t, x) for s ∈ [s̄iin (t, x), t], and using the chain rules yields the ODE
⎧
⎪ d n m
⎪
⎨ yi (s, χi (s; t, x)) = mij (χi (s; t, x)) yj (s, χi (s; t, x)) + gij (χi (s; t, x)) yj (s, 0) ,
ds
=1 =1
⎪ in
j j
⎪
⎩ yi s̄i (t, x), χi (s̄iin (t, x); t, x) = bi yi0 , ui , y− (·, 0) (t, x),
(26)
where the initial condition bi (yi0 , ui , y− (·, 0))(t, x) is given by the appropriate boundary or initial
conditions in (16):
• for i ∈ {m + 1, . . . , n},
⎧ m
⎪
⎪
⎨ qi−m,j yj (siin (t, x), 0) if siin (t, x) > 0,
bi yi0 , ui , y− (·, 0) (t, x) = j =1 (28)
⎪
⎪
⎩
yi0 (χi (0; t, x)) if siin (t, x) < 0.
Integrating the ODE (26) over s ∈ [s̄iin (t, x), t], we obtain the following system of integral equa-
tions:
n t
yi (t, x) = bi yi0 , ui , y− (·, 0) (t, x) + mij (χi (s; t, x))yj (s, χi (s; t, x)) ds
j =1 in
s̄i (t,x)
m t
+ gij (χi (s; t, x)) yj (s, 0) ds. (29)
j =1 in
s̄i (t,x)
This leads to the following notion of solution called “solution along the characteristics”:
Definition 2.1. Let T > 0, y 0 ∈ L2 (0, 1)n and u ∈ L2 (0, T )m be fixed. We say that a function
y : (0, T ) × (0, 1) −→ Rn is a solution to the system (16) in (0, T ) if
and if the integral equation (29) is satisfied for every i ∈ {1, . . . , n} and for a.e. (t, x) ∈ (0, T ) ×
(0, 1).
669
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Using the Banach fixed-point theorem and suitable estimates, we can establish that the system
(16) is globally well posed in this sense:
Theorem 2.2. For every T > 0, y 0 ∈ L2 (0, 1)n and u ∈ L2 (0, T )m , there exists a unique solution
y ∈ C 0 ([0, T ]; L2 (0, 1)n ) ∩ C 0 ([0, 1]; L2 (0, T )n ) to the system (16) in (0, T ). Moreover, we have
yC 0 ([0,T ];L2 (0,1)n ) + yC 0 ([0,1];L2 (0,T )n ) ≤ C y 0 + u 2
L (0,T ) m , (30)
L2 (0,1) n
For a proof of this result, we refer for instance to [4, Appendix A.2] (see also [5, Lemma 3.2]
in the L∞ setting).
3. Backstepping transformation
In this section, we use a Volterra transformation of the second kind to transform our initial
system (1) into a system with a simpler coupling structure, this is the so-called backstepping
method for partial differential equations. The content of this section is quite standard by now
(yet, formulated differently here), see for instance [21, Section 2.2] (or [5, Section 2]).
First of all, we perform a simple preliminary transformation in order to remove the diagonal
terms in M. This is only a technical step, which is nevertheless necessary in view of the existence
of the transformation that we will use in the next section, see Remark 3.3 below. For convenience,
we introduce the set
M = M ∈ L∞ (0, 1)n×n mii = 0, ∀i ∈ {1, . . . , n} .
Proposition 3.1. There exists a map : L∞ (0, 1)n×n −→ M such that, for every M ∈
L∞ (0, 1)n×n , we have
where E = diag(e1 , . . . , en ) ∈ W 1,∞ (0, 1)n×n is the diagonal matrix whose entries are
⎛ ⎞
x
mii (ξ ) ⎠
ei (x) = exp ⎝− dξ .
λi (ξ )
0
670
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
• Assume now that y is a solution to the system (, M, Q, −) for some y 0 and u and let us
show that ỹ is then a solution to the system (, (M), Q, −) for some ỹ 0 and ũ, where
(M) will be determined below. We do it formally but this can be rigorously justified.
– The initial data is obviously ỹ 0 (x) = E(x)y 0 (x).
– The boundary condition at x = 0 is clearly satisfied since ỹ(t, 0) = y(t, 0).
– Looking at the boundary condition at x = 1, the control ũ is
– Using the equation satisfied by y and the fact that and E commute, a computation shows
that
∂ ỹ ∂ ỹ ∂E
(t, x) + (x) (t, x) = E(x)M(x) + (x) (x) y(t, x).
∂t ∂x ∂x
Now that is clearly identified, similar computations show that, conversely, if ỹ is a solu-
tion to the system (, (M), Q, −) for some ỹ 0 and ũ, then y is a solution to the system
(, M, Q, −) for some y 0 and u.
• Finally, it is clear that (M) ∈ M by construction.
We now recall an important result from [21] and [16] that we present here using the notion of
equivalent system. To this end, we introduce the set
F = A ∈ L∞ (0, 1)n×n A−+ = A+− = 0 .
Theorem 3.2. For every A ∈ F , there exists a map A : M −→ L∞ (0, 1)n×m such that, for
every M ∈ M, we have
x
ỹ(t, x) = y(t, x) − K(x, ξ )y(t, ξ ) dξ,
0
671
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
1
ũ(t) = ỹ− (t, 1) = y− (t, 1) − H (ξ )y(t, ξ ) dξ, (31)
0
where H (ξ ) = K−− (1, ξ ) K−+ (1, ξ ) .
– Using the equation satisfied by y, integrating by parts, and using the boundary condition
satisfied by y at x = 0, we have
∂ ỹ ∂ ỹ
(t, x) + (x) (t, x) =
∂t ∂x
x
∂K ∂K ∂
− (x) (x, ξ ) + (x, ξ )(ξ ) + K(x, ξ ) (ξ ) + M(ξ ) y(t, ξ ) dξ
∂x ∂ξ ∂ξ
0
and provided that the kernel K satisfies the so-called kernel equations:
⎧
⎪
⎨(x) ∂K (x, ξ ) + ∂K (x, ξ )(ξ ) + K(x, ξ ) ∂ (ξ ) + M(ξ ) = 0,
∂x ∂ξ ∂ξ (33)
⎪
⎩
(x)K(x, x) − K(x, x)(x) = M(x).
672
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
then we see that for i = j we shall necessarily have mii = 0. Therefore, it is necessary that
M ∈ M (otherwise the equation (34), and thus the kernel equations (33), have no solution). This
explains why we had to perform a preliminary transformation in Section 3.1 to reduce the general
case to this one.
From [16, Section VI], we know that the kernel equations (33) have a solution (see also [21,
Remark A.2] to see how to deal with space-varying speeds). More precisely, we can extract the
following result:
Theorem 3.4. For every A ∈ F , for every M ∈ M, there exists a unique solution K ∈ L∞ (T )n×n
to the kernel equations (33) with:
As before, the notion of solution is to be understood in the sense of solution along the charac-
teristics. By K ∈ C 0 ((0, 1]; L2 (0, x)n×n ) we mean that K(xn , ·) − K(x, ·)L2 (0,min{xn ,x})n×n →
0 as xn → x, for every x ∈ (0, 1], with a similar definition for K ∈ C 0 ([0, 1); L2 (ξ, 1)n×n ). De-
spite not mentioned in the literature, these important regularities can be deduced from the system
of integral equations satisfied by the kernel. In particular, it shows that H and A (M) defined in
(31) and (32) have the following regularities:
Remark 3.5. The set F corresponds to the set of boundary conditions that are free to choose
for the kernel equations. The freedom for the boundary condition (35) was already used in the
works [15,16,21] in order to give to (A (M))−− a structure of strictly lower triangular matrix.
However, in the present paper this will not be used and it is the other boundary condition (36)
that will turn out to be essential (see Section 6 below).
In this section we perform some transformations to show that we can always assume that the
boundary coupling matrix Q is in canonical form. More precisely, we prove the following result:
673
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Proposition 4.1. For every invertible upper triangular matrix U ∈ Rm×m and every invertible
lower triangular matrix L ∈ Rp×p , there exists a map : L∞ (0, 1)n×m −→ L∞ (0, 1)n×m such
that, for every G ∈ L∞ (0, 1)n×m , we have
Proof. • For any i, j ∈ {1, . . . , n}, we denote by ζij the solution to the ODE
⎧
⎪
⎨ d ζij (s) = λj (ζij (s)) , ∀s ∈ R,
ds λi (s)
⎪
⎩ζ (0) = 0.
ij
• We first prove that, for every invertible upper triangular matrix U ∈ Rm×m , there exists a map
−− : L∞ (0, 1)m×m −→ L∞ (0, 1)m×m such that, for every G ∈ L∞ (0, 1)n×m , we have
−− (G−− )
(, −, Q, G) ∼ , −, QU, .
G+− U
where U −1 = (uik )1≤i,k≤m . Let us first show that this transformation is well defined and
invertible. We can check that, for i ≤ k ≤ m, we have (recall (25))
In particular, for such indices, ζik is a C 1 -diffeomorphism from (0, 1) to a subset of (0, 1)
and thus the transformation (37) is well defined on L2 (0, 1)n . Besides, using the property
ζkj (ζik (x)) = ζij (x) for i ≤ k ≤ j , we can check that its inverse is given by
⎧ m
⎪
⎪
⎨ ukj ỹj (t, ζkj (x)) for k ∈ {1, . . . , m} ,
yk (t, x) = j =k
⎪
⎪
⎩
ỹk (t, x) for k ∈ {m + 1, . . . , n} .
• Assume now that y is a solution to the system (, −, Q, G) for some y 0 and u and let us
for some ỹ 0 and ũ, where
show that ỹ is then a solution to the system (, −, QU, G)
= −− (G−− ) ,
G
G+− U
and where −− (G−− ) will be determined below. Once again, we do it formally but this can
be rigorously justified.
674
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
– The boundary condition at x = 0 is clearly satisfied since ỹ+ = y+ and ỹ− (t, 0) =
U −1 y− (t, 0).
– Looking at the boundary condition at x = 1, the control ũ is
m
ũi (t) = ỹi (t, 1) = uik yk (t, ζik (1)), ∀i ∈ {1, . . . , m} .
k=i
– It is clear that ỹ+ = y+ satisfies the desired equation. Let us now fix i ∈ {1, . . . , m}. A
computation shows that
∂ ỹi ∂ ỹi m
(t, x) + λi (x) (t, x) − g̃ij (x)ỹj (t, 0) =
∂t ∂x
j =1
m
∂ζik ∂yk
uik −λk (ζik (x)) + λi (x) (x) (t, ζik (x))
∂x ∂x
k=i
⎛ ⎞
m m
+ ⎝ uik gk (ζik (x)) − g̃ij (x)uj ⎠ y (t, 0).
=1 k=i j =1
m
uik gk (ζik (x)) − g̃ij (x)uj = 0, ∀ ∈ {1, . . . , m} .
k=i j =1
m
j
g̃ij (x) = uik gk (ζik (x)) uj .
=1 k=i
Now that −− is clearly identified, similar computations show that, conversely, if ỹ is a
for some ỹ 0 and ũ, then y is a solution to the system
solution to the system (, −, QU, G)
(, −, Q, G) for some y and u.
0
• Similarly, we can prove that, for every invertible lower triangular matrix L ∈ Rp×p , there
exists a map +− : L∞ (0, 1)p×m −→ L∞ (0, 1)p×m such that, for every G ∈ L∞ (0, 1)n×m ,
we have
675
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
G−−
(, −, Q, G) ∼ , −, LQ, .
+− (G+− )
((38) is still valid for the indices considered) where L = (ij )1≤i,j ≤p and taking
i
g̃ij (x) = i−m,k−m gkj (ζik (x)),
k=m+1
Thanks to the result of previous section it is from now on sufficient to consider boundary
coupling matrices which are in canonical form. This is a big step forward, which already allows
us to characterize the smallest value of the minimal null control time.
We start with systems of the form (, −, Q, G), we will discuss in the next section how to
deduce the corresponding result for the initial system (, M, Q, −).
Theorem 5.1. Let Q0 ∈ Rp×m be in canonical form, G ∈ L∞ (0, 1)n×m and T > 0 be fixed.
(ii) If T satisfies the condition (39), then the system (, −, Q0 , −) (i.e. with G = 0) is null
controllable in time T with control u = 0.
As for Theorem 1.11, we use the convention that the undefined quantities are simply not taken
into account, which means that the condition (39) is reduced to T ≥ max {Tm+1 (), Tm ()}
when ρ = 0 (i.e. when Q0 = 0).
This result shows in particular that the smallest value that Tinf (, −, Q0 , G) can take with
respect to G ∈ L∞ (0, 1)n×m is equal to the quantity on the right-hand side of the inequality in
(39). This can be extended to arbitrary boundary coupling matrices thanks to Proposition 4.1.
676
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Proof of Theorem 5.1. We use the ideas of the proof of [18, Lemma 3.3].
We point out that for this first step there is no need to assume that Q0 is in canonical form.
Assume that T < max {Tm+1 (), Tm ()}. Then, there exists i ∈ {1, . . . , n} such that T <
Ti (). Let ωi be the open subset defined by
ωi = x ∈ (0, 1) siin (T , x) < 0 . (40)
T < Ti () ⇐⇒ ωi = ∅.
T
m
0 = yi0 (χi (0; T , x)) + gij (χi (s; T , x)) yj (s, 0) ds.
0 j =1
T
m
(Kh)(x) = − gij (χi (s; T , x)) hj (s) ds,
0 j =1
is surjective. This is impossible since its range is clearly a subset of L∞ (ωi ), which is a
proper subset of L2 (ωi ).
2) Suppose now that ρ = 0 (otherwise we are done) and that T is such that
We have seen in the previous step that the condition T ≥ max {Tm+1 (), Tm ()} means
that all the subsets ωi defined in (40) are empty. In particular (recall also (22)),
in
sm+r k
(T , x) > 0, ∀x ∈ (0, 1).
0
677
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Therefore, the null controllability condition ym+rk0 (T , x) = 0 is equivalent to (see (29) and
recall that Q0 is in canonical form)
T
m
0 = yck0 (sm+r
in
k0
(T , x), 0) + gm+rk0 ,j χm+rk0 (s; T , x) yj (s, 0) ds. (41)
in j =1
sm+r (T ,x)
k0
T
m
+ gm+rk0 ,j χm+rk0 (s; T , x) yj (s, 0) ds.
in j =1
sm+r (T ,x)
k0
This leads to a contradiction by using the same argument as at the end of the first step.
3) Finally, it is not difficult to see from (29) that, when G = 0, the control u = 0 brings the
solution of the system (, −, Q0 ) to zero in any time T satisfying (39).
Let us now show how the previous results yield the desired characterization of the smallest
minimal null control time for the initial system (, M, Q, −).
Proof of item (i) of Theorem 1.11. • Let M ∈ L∞ (0, 1)n×n and Q ∈ Rp×m be fixed. Let
T > 0 be such that the system (, M, Q, −) is null controllable in time T .
– By Proposition 3.1 and Theorem 3.2, there exists G ∈ L∞ (0, 1)n×m such that the system
(, −, Q, G) is null controllable in time T .
– From Proposition 4.1, there exists G ∈ L∞ (0, 1)n×m such that the system (, −, Q0 , G)
0
is null controllable in time T , where Q is the canonical form of Q.
– By item (i) of Theorem 5.1 we obtain that T has to satisfy the condition (39).
678
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Let us conclude this section with some interesting remarks on the case M = 0. For M = 0,
we can combine Theorem 5.1 with Proposition 4.1 (with G = 0, in which case their proofs are
greatly simplified) to obtain a completely different proof of [26, Theorems 1 and 2]. Our proof
has several advantages. Firstly, we directly obtain a more explicit expression of the minimal null
control time (see e.g. [17, Remark 1.15]). On the other hand, we do not need to use the so-called
duality and we are able to obtain an explicit control. More precisely, we can extract the following
result from item (ii) of Theorem 5.1 and the proof of Proposition 4.1:
Proposition 5.2. Let Q ∈ Rp×m and T satisfy (39). Then, the system (, −, Q, −) is finite-time
stabilizable with settling time T , with the following explicit feedback law:
m
ui (t) = − uik yk (t, ζik (1)), i ∈ {1, . . . , m} , (42)
k=i+1
where U −1 = (uik )1≤i,k≤m and U is any matrix U coming from the LCU decomposition of Q.
We recall that the previous statement simply means that, if we replace the i-th component of u
by the right-hand side of the formula (42) in the system (1) (with M = 0), then the corresponding
solution satisfies y(T , ·) = 0 for every y 0 ∈ L2 (0, 1)n . We also recall that systems with such
boundary conditions are well posed (see e.g. [5, Section 3] in the L∞ setting).
A similar result was obtained in the proof of [5, Proposition 1.6] when Q ∈ B (defined in
(7)), our result generalizes it to arbitrary Q ∈ Rp×m . Let us illustrate with an example that the
feedback law that we have obtained (42) is also the same as in this reference when Q ∈ B.
Example 5.3. Let us consider the 6 × 6 system used as example in [5, p. 1155]: we take p =
m = 3, the negative speeds are
the positive speeds are arbitrary (subject to (3)), and we take the boundary coupling matrix
⎛ ⎞
1 −1 −1
Q = ⎝1 0 2 ⎠,
a b c
679
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Remark 5.4. Let us also add that another advantage of not using the duality is that it can be useful
to deal with other functional settings (e.g. C 1 , provided that the inequality in (39) is strict).
We are now left with the proof of the second part of Theorem 1.11, which is more difficult
and require more work.
In this section, we will show how to use the canonical structure of the boundary coupling
matrix to remove some coupling terms in the matrix G+− . For any Q ∈ Rp×m , we introduce the
set
C(Q) = G+− ∈ L∞ (0, 1)p×m gm+i,c = 0, ∀k ∈ {1, . . . , ρ} , ∀i ≥ rk
k
Proposition 6.1. Assume that Q0 ∈ Rp×m is in canonical form. Then, there exists a map ϒ :
L∞ (0, 1)p×m −→ C(Q0 ) such that, for every G ∈ L∞ (0, 1)n×m , we have
G−−
(, −, Q0 , G) ∼ , −, Q0 , . (43)
ϒ(G+− )
Proof. We assume that ρ = 0 since otherwise there is nothing to prove. Reproducing the proof
of Theorem 3.2 with the kernel
0 0
K= ,
0 K++
x
(ϒ(G+− ))(x) = G+− (x) − K++ (x, 0)++ (0)Q − 0
K++ (x, ξ )G+− (ξ ) dξ,
0
680
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
This is an uncoupled system with many solutions (as we already know from Theorem 3.4). Let
us find a particular one that guarantees that ϒ(G+− ) ∈ C(Q0 ). Let i, j ∈ {m + 1, . . . , n} be fixed.
The equation for kij is simply
⎧
⎪
⎨λi (x) ∂kij (x, ξ ) + ∂kij (x, ξ )λj (ξ ) + kij (x, ξ ) ∂λj (ξ ) = 0,
∂x ∂ξ ∂ξ (44)
⎪
⎩k (x, x) = 0, if i = j.
ij
• If i ≥ j , then there exists a unique solution to (44) which satisfies kij (x, 0) = aij (x) (aij ∈
L∞ (0, 1) is arbitrary) and it is given by
⎧
⎪ λ (0)
⎨aij (sijin (x, ξ )) j if ξ < ζij (x; 0, 0),
kij (x, ξ ) = λj (ξ )
⎪
⎩
0 if ξ > ζij (x; 0, 0),
• If i < j , then there exists a unique solution to (44) which satisfies kij (1, ξ ) = aij (ξ ) (aij ∈
L∞ (0, 1) is arbitrary) and it is given by
⎧
⎪ λ (ζ (1; x, ξ ))
⎨aij (ζij (1; x, ξ )) j ij if ξ < ζij (x; 1, 1),
kij (x, ξ ) = λj (ξ )
⎪
⎩
0 if ξ > ζij (x; 1, 1).
We choose aij = 0 for i < j , so that kij = 0 for such indices. Let us now fix the remaining aij
to ensure that ϒ(G+− ) ∈ C(Q0 ). To this end, we fix i ∈ {m + 1, . . . , n} such that Ei = ∅, where
Ei = {α ∈ {1, . . . , ρ} | m + rα ≤ i} .
681
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
The (i, cα )-th entry of ϒ(G+− ) is equal to zero if, and only if,
n x
n
0 = gicα (x) − 0
ki (x, 0)λ (0)q−m,c α
− ki (x, ξ )g,cα (ξ ) dξ.
=m+1 0 =m+1
Using the explicit formulas for kij and the assumption that Q0 is in canonical form, for α ∈ Ei
this identity is equivalent to
ζi
(x;0,0)
i
λ (0)
0 = gicα (x) − ai,m+rα (x)λm+rα (0) − in
ai (si (x, ξ )) g,cα (ξ ) dξ.
λ (ξ )
=m+1 0
in
si (x, ζi (x; θ, 0)) = si
in
(θ, 0) = θ,
and isolating the terms for = m + rβ with β ∈ Ei , this gives the following system of Volterra
equations of the second kind:
x
with L∞ kernel
λ (0) ∂ζi
hiα (x, θ ) = gc (ζi (x; θ, 0)) (x; θ, 0),
λ (ζi (x; θ, 0)) α ∂θ
x
Setting
ai = 0, ∀ ∈ Fi ,
we have fiα = gicα and, once fiα is known, the remaining values ai,m+rα , α ∈ Ei , are uniquely
determined by solving the system (45).
Remark 6.2. Let us point out that it is also possible to transform the matrix G−− into a strictly
lower triangular matrix by using the kernel
682
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
K−− 0
K= ,
0 0
and by appropriately choosing some boundary conditions for K−− (this is the same proof as
above with Q0 = Id). In summary, whatever Q ∈ Rp×m and G ∈ L∞ (0, 1)n×m are, we have
shown that we can always find some transformations so that we are reduced to the case where:
• Q is in canonical form.
• G−− is strictly lower triangular.
• G+− ∈ C(Q).
Let us add that, in general, it is not possible to remove more terms by using some transfor-
mations (e.g. the backstepping method). In other words, there is in general no simpler equivalent
system. An example has been detailed in Appendix A. In this sense, systems (, −, Q, G) with
the above structure could be called “in canonical form”.
7. Reduction by compactness-uniqueness
In this section, we show that, even though we can not in general fully remove G−− by using
some transformations (Remark 6.2), nevertheless, the two systems share the same minimal null
control time:
Theorem 7.1. Let Q ∈ Rp×m be fixed. For every G ∈ L∞ (0, 1)n×m , we have
0
Tinf (, −, Q, G) = Tinf , −, Q, . (46)
G+−
Remark 7.2. Let us emphasize once again that it is impossible to prove Theorem 7.1 by using
some transformations to pass from one system to the other (e.g. backstepping). In other words,
these two systems are in general not equivalent (in the sense of Definition 1.18). Therefore, a
different method is necessary to prove Theorem 7.1. We will do it thanks to a compactness-
uniqueness method adapted to the null controllability property.
We will present here a general compactness-uniqueness method adapted to the null controlla-
bility property. We will see in the next section how to use it in order to obtain Theorem 7.1.
First of all, let us briefly recall some basic facts about abstract linear control systems. All along
this section, H and U are two complex Hilbert spaces, A : D(A) ⊂ H −→ H is the generator of
a C0 -semigroup (S(t))t≥0 on H and B ∈ L(U, D(A∗ ) ). Here and in what follows, E denotes
the anti-dual of the complex space E, that is the complex (Banach) space of all continuous
conjugate linear forms. We will use the convention that an inner product of a complex Hilbert
space is conjugate linear in its second argument. One of the reason why we have to consider
complex (and not real) spaces is because we will use below a condition involving the spectral
elements of the operator A, we will explain how to deal with real Banach spaces in practice at
the end of this section in Remark 7.8.
683
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
Let us now consider the evolution problem associated with the pair (A, B), i.e.
⎧d
⎨ y(t) = Ay(t) + Bu(t), t ∈ (0, T ),
dt (47)
⎩
y(0) = y 0 ,
where T > 0, y(t) is the state at time t, y 0 is the initial data and u(t) is the control at time t.
Let us recall a standard procedure to define a notion of solution in H to (47) for non smooth
functions. We formally multiply (47) by a smooth function z, integrate over an arbitrary time
interval (0, τ ) ⊂ (0, T ), perform an integration by parts and use the adjoints to obtain the identity
! " τ # $ τ
d ∗
% &
y(τ ), z(τ )H − y , z(0) +
0
y(t), − z(t) − A z(t) dt = u(t), B ∗ z(t) U dt.
H dt H
0 0
Particularizing this identity for the solution z to the so-called adjoint system
⎧ d
⎨− z(t) = A∗ z(t), t ∈ (0, τ ),
dt (48)
⎩
z(τ ) = z1 ,
i.e. z(t) = S(τ − t)∗ z1 , where z1 is arbitrary, this leads to the following notion of solution in H :
Definition 7.3. Let T > 0, y 0 ∈ H and u ∈ L2 (0, T ; U ) be fixed. We say that a function y :
[0, T ] −→ H is a solution to (47) if y ∈ C 0 ([0, T ]; H ) and
! " ! " τ
% &
y(τ ), z 1
− y , z(0) =
0
u(t), B ∗ z(t) U dt, (49)
H H
0
for every τ ∈ (0, T ] and z1 ∈ D(A∗ ), where z ∈ C 0 ([0, τ ]; D(A∗ )) is the solution to the adjoint
system (48).
For the system (47) to be well posed in this sense, the space H has to satisfy some properties.
Definition 7.4. We say that H is an admissible subspace for the system (A, B) if the following
regularity property holds:
τ 2
∗
∀τ > 0, ∃C > 0, B z(t)2 dt ≤ C
z1 , ∀z1 ∈ D(A∗ ), (50)
U H
0
684
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
We recall that, thanks to basic semigroup properties, it is equivalent to prove (50) for one
single τ > 0.
If H is an admissible subspace for (A, B), then the map
τ
∗
% &
z ∈ D(A ) −→
1
u(t), B ∗ z(t) U dt,
0
can be extended to a continuous conjugate linear form on H . Thus, we have a natural definition
for the map τ ∈ [0, T ] −→ y(τ ) ∈ H through the formula (49). It can be proved that this map is
also continuous and that it depends continuously on y 0 and u on compact time intervals (see e.g.
[10, Theorem 2.37]). This establishes the so-called well-posedness of the abstract control system
(47) in H .
Now that we have a notion of continuous solution for the system (47) in the space H , we can
speak of its controllability properties in H .
Definition 7.5. We say that the system (47) is null controllable in time T if, for every y 0 ∈
H , there exists u ∈ L2 (0, T ; U ) such that the corresponding solution y ∈ C 0 ([0, T ]; H ) to the
system (47) satisfies
y(T ) = 0.
It is also well known that controllability has a dual concept named observability. We have the
following characterization (see e.g. [10, Theorem 2.44]):
Theorem 7.6. Let T > 0 be fixed. The system (A, B) is null controllable in time T if, and only
if, there exists C > 0 such that, for every z1 ∈ D(A∗ ),
T
∗
z(0)2H ≤C B z(t)2 dt,
U
0
where z ∈ C 0 ([0, T ]; D(A∗ )) is the solution to the adjoint system (48) (with τ = T ).
After these basic reminders, we can now clearly introduce the general compactness-
uniqueness result on which the proof of Theorem 7.1 will rely on.
Theorem 7.7. Let H and U be two complex Hilbert spaces. Let A : D(A) ⊂ H −→ H be the
generator of a C0 -semigroup (S(t))t≥0 on H and let B ∈ L(U, D(A∗ ) ). We assume that H is
an admissible subspace for (A, B) and that (A, B) satisfies the so-called Fattorini-Hautus test,
namely:
Assume in addition that there exists T0 > 0 such that, for every T > T0 , the following two prop-
erties hold:
685
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
(i) There exist two complex Banach spaces E1 , E2 , a compact operator P : E1 −→ E2 , a linear
operator L : D(A∗ ) −→ E1 and C > 0 such that, for every z1 ∈ D(A∗ ),
⎛ T ⎞
2
z(0)2H ≤ C ⎝ B ∗ z(t)U dt + P Lz1 ⎠ ,
2
(52)
E2
0
⎛ ⎞
2 T
1
Lz ≤ C ⎝z(0)2H + B ∗ z(t)U dt ⎠ ,
2
(53)
E1
0
where z ∈ C 0 ([0, T ]; D(A∗ )) is the solution to the adjoint system (48) (with τ = T ).
(ii) For every 0 < t1 < t2 < T − T0 , there exists C > 0 such that, for every z1 ∈ D(A∗ ),
⎛ ⎞
t2
∗ 2
z(t2 )2H ≤ C ⎝z(t1 )2H + B z(t) dt ⎠ , (54)
U
t1
where z ∈ C 0 ([0, T ]; D(A∗ )) is the solution to the adjoint system (48) (with τ = T ).
Then, the system (A, B) is null controllable in time T for every T > T0 .
The proof of this result is postponed to Appendix B for the sake of the presentation. It is
based on arguments developed in the proofs of [8, Theorem 2] and [13, Lemma 2.6] (see also the
references therein). Let us just mention at this point that, in general, the compactness-uniqueness
method is designed for the exact controllability property. It is only thanks to the property (54)
that we are able to consider the null controllability property here.
Remark 7.8. In most applications we encounter real systems, that is H and U are real Banach
spaces. To apply what precedes, we have to consider their so-called complexifications as well as
the complexifications of the operators A and B. By splitting the complex system (i.e. the system
corresponding to these complexifications) into real and imaginary parts, it is not difficult to check
that the real system is controllable if, and only if, so is the complex system.
Let us now show how to use the general result Theorem 7.7 in order to obtain Theorem 7.1.
We only prove the inequality “≤” in (46) (which is the most important one), the other inequality
can be established similarly. Let then T0 > 0 be such that
0
, −, Q, (55)
G+−
is null controllable in time T0 and let us show that necessarily Tinf (, −, Q, G) ≤ T0 . This will
follow from Theorem 7.7 once we will have checked that the system (, −, Q, G) satisfies all
the assumptions of this result.
686
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
First of all, we have to recast the system (, −, Q, G) as an abstract evolution system of the
form (47). This is quite standard. To identify what are the operators A and B (in fact, we first find
A∗ and B ∗ ), we repeat the procedure that led to Definition 7.3 on the system (16) (with M = 0),
where taking the adjoints is replaced by an integration by parts in space. This gives the following.
∂y
(Ay)(x) = −(x) (x) + G(x)y− (0), x ∈ (0, 1),
∂x
with domain
D(A) = y ∈ H 1 (0, 1)n y− (1) = 0, y+ (0) = Qy− (0) .
• It is clear that D(A) is dense in H since it contains Cc∞ (0, 1)n . A computation shows that
⎧ ⎫
⎨ 1 ⎬
D(A∗ ) = z ∈ H 1 (0, 1)n z− (0) = R ∗ z+ (0) + K(ξ )∗ z(ξ ) dξ, z+ (1) = 0 ,
⎩ ⎭
0
∂z ∂
(A∗ z)(x) = (x) (x) + (x)z(x), x ∈ (0, 1).
∂x ∂x
• The control operator B ∈ L(U, D(A∗ ) ) is given for every u ∈ U and z ∈ D(A∗ ) by
Note that B is well defined since Bu is continuous on H 1 (0, 1)n (by the trace theorem
H 1 (0, 1)n → C 0 ([0, 1])n ) and since the graph norm ·D(A∗ ) and ·H 1 (0,1)n are equivalent
norms on D(A∗ ).
• Finally, the adjoint B ∗ ∈ L(D(A∗ ), U ) is given for every z ∈ D(A∗ ) by
687
L. Hu and G. Olive Journal of Differential Equations 336 (2022) 654–707
We can prove that A is closed and that both A, A∗ are quasi-dissipative, so that A generates a
C0 -semigroup by a well-known corollary of Lumer-Phillips theorem.
Since the other properties to check depend on the adjoint system, it is convenient to write it
explicitly:
⎧
⎪
⎪
∂z ∂z
(t, x) + (x) (t, x) = −
∂
⎪
⎪ (x)z(t, x),
⎪
⎪ ∂t ∂x ∂x
⎪
⎪
⎨ 1
⎪z− (t, 0) = R z+ (t, 0) + K(ξ )∗ z(t, ξ ) dξ, z+ (t, 1) = 0,
∗
⎪
⎪
⎪
⎪
⎪
⎪ 0
⎪
⎩
z(T , x) = z (x).
1
Using the method of characteristics it is easy to prove the estimate (50) for τ ≤ T1 (), which
shows that H is an admissible subspace for (A, B).
Therefore, the abstract control system is well posed in H . To rigorously justify that this pair
(A, B) is “the” abstract form of (, −, Q, G) we have to reason in terms of notions of solution:
Proposition 7.9. The solution to system (, −, Q, G) in the sense of Definition 2.1 coincides
with the solution to abstract system (47) in the sense of Definition 7.3 corresponding to the pair
(A, B) introduced above.
Proof. We argue by approximation. Let y^0 ∈ L^2(0,1)^n, u ∈ L^2(0,T)^m be fixed and let y be the
corresponding solution to system (Λ, −, Q, G) in the sense of Definition 2.1.
• We take two approximations (y^{0,k})_k ⊂ H_0^1(0,1)^n and (u^k)_k ⊂ H_0^1(0,T)^m such that
Let y^k be the solution corresponding to y^{0,k} and u^k in the sense of Definition 2.1. Since y^{0,k}
and u^k obviously satisfy the C^0 compatibility conditions
(for instance, by adapting the fixed point approach of [4, Appendix A.2] in the above space –
the regularity G ∈ L^∞(0,1)^{n×m} is enough after a suitable change of variable). In particular,
y^k ∈ H^1((0,T) × (0,1))^n and it satisfies (16) almost everywhere (with M = 0, y = y^k,
u = u^k and y^0 = y^{0,k}).
• Repeating the procedure that led to Definition 7.3 we easily check that y^k is the solution to
abstract system (47) in the sense of Definition 7.3, i.e. it satisfies identity (49) (with y = y^k,
y^0 = y^{0,k} and u = u^k). Using (56) and
(this follows from (30) and (56)), we can pass to the limit k → +∞ in this identity to obtain
that y is the solution to abstract system (47) in the sense of Definition 7.3.
We will now check that our pair (A, B) satisfies the assumptions of Theorem 7.7.
• The Fattorini-Hautus test (51) is easy to check. Indeed, if λ ∈ C and z ∈ D(A∗ ) are such that
A∗ z = λz and B ∗ z = 0, then in particular z ∈ H 1 (0, 1)n solves the system of linear ODEs
$$\begin{cases}
\dfrac{\partial z}{\partial x}(x) = \Lambda(x)^{-1}\left( -\dfrac{\partial \Lambda}{\partial x}(x) + \lambda\,\mathrm{Id}_{\mathbb{R}^{n\times n}} \right) z(x), & x \in (0,1),\\[2mm]
z(1) = 0,
\end{cases}$$
so that z = 0 by uniqueness.
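To make the uniqueness claim explicit, here is a minimal sketch of one standard argument: since Λ ∈ C^{0,1}([0,1])^{n×n} and the speeds λ_i do not vanish on [0,1], the coefficient matrix of the above ODE is bounded, so that
$$\left|\frac{d}{dx}\|z(x)\|_{\mathbb{C}^n}^2\right| = 2\left|\operatorname{Re}\left\langle \frac{\partial z}{\partial x}(x),\, z(x)\right\rangle_{\mathbb{C}^n}\right| \le C\,\|z(x)\|_{\mathbb{C}^n}^2,$$
and Gronwall's lemma, applied backward from x = 1 where z(1) = 0, yields z = 0 on [0,1].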
Below, C denotes a positive number that may change from line to line but that never depends
on z1 or t.
• The inequality (54) is also not difficult to check. Indeed, for 0 < t1 < t2 < T − T0 , using the
method of characteristics, we have
$$\|z_-(t_2,\cdot)\|_{L^2(0,1)^m}^2 \le C\left( \|z_-(t_1,\cdot)\|_{L^2(0,1)^m}^2 + \int_{t_1}^{t_2} \|z_-(t,1)\|_{\mathbb{C}^m}^2\,dt \right),$$
and, provided that T_0 ≥ T_{m+1}(Λ) and using that z_+(·,1) = 0, we also have
We recall that, since the system (55) is null controllable in time T_0 by assumption, we necessarily have T_0 ≥ T_{m+1}(Λ) (see the first step of the proof of Theorem 5.1).
• Let us now investigate the estimate (52). Let T > T_0. We will prove that there exists H ∈
L^∞((0,T) × (0,T))^{m×m} such that, for every z^1 ∈ L^2(0,1)^n,
$$\|z(0,\cdot)\|_{L^2(0,1)^n}^2 \le C\left( \int_0^T \|z_-(t,1)\|_{\mathbb{C}^m}^2\,dt + \int_0^T \left\| \int_0^T H(t,s)\,z_-(s,0)\,ds \right\|_{\mathbb{C}^m}^2 dt \right). \tag{57}$$
Let us first make some preliminary observations. We denote by ζ the solution to the adjoint
system of (55) in (0, T) with final data z^1, and we set
$$\theta = z - \zeta. \tag{58}$$
Clearly, it satisfies
$$\begin{cases}
\dfrac{\partial \theta}{\partial t}(t,x) + \Lambda(x)\,\dfrac{\partial \theta}{\partial x}(t,x) = -\dfrac{\partial \Lambda}{\partial x}(x)\,\theta(t,x),\\[2mm]
\theta_-(t,0) = R^* \theta_+(t,0) + \displaystyle\int_0^1 K_{+-}(\xi)^* \theta_+(t,\xi)\,d\xi + \displaystyle\int_0^1 K_{--}(\xi)^* z_-(t,\xi)\,d\xi, \qquad \theta_+(t,1) = 0,\\[2mm]
\theta(T,x) = 0.
\end{cases}$$
Since θ_+ solves a decoupled transport equation with positive speeds, with zero boundary data at x = 1 and zero final data at t = T, the method of characteristics gives
$$\theta_+ = 0. \tag{59}$$
Consequently, θ_- solves
$$\begin{cases}
\dfrac{\partial \theta_-}{\partial t}(t,x) + \Lambda_{--}(x)\,\dfrac{\partial \theta_-}{\partial x}(t,x) = -\dfrac{\partial \Lambda_{--}}{\partial x}(x)\,\theta_-(t,x),\\[2mm]
\theta_-(t,0) = \displaystyle\int_0^1 K_{--}(\xi)^* z_-(t,\xi)\,d\xi,\\[2mm]
\theta_-(T,x) = 0.
\end{cases}$$
Since T > T_0 ≥ T_m(Λ), using the method of characteristics, it is not difficult to see that, for
t ∈ (0, T), we have
$$\|\theta_-(t,0)\|_{\mathbb{C}^m}^2 = \left\| \int_0^1 K_{--}(\xi)^* z_-(t,\xi)\,d\xi \right\|_{\mathbb{C}^m}^2 \le C\left( \left\| \int_t^T H(t,s)\,z_-(s,0)\,ds \right\|_{\mathbb{C}^m}^2 + \int_0^t \|z_-(s,1)\|_{\mathbb{C}^m}^2\,ds \right), \tag{60}$$
for some H ∈ L^∞((0,T) × (0,T))^{m×m} independent of z^1. Let us now prove the desired
estimate (57). Since by assumption the system (55) is null controllable in time T_0, and thus
in time T > T_0, the solution ζ to its adjoint system satisfies (see Theorem 7.6)
$$\|\zeta(0,\cdot)\|_{L^2(0,1)^n}^2 \le C \int_0^T \|\zeta_-(t,1)\|_{\mathbb{C}^m}^2\,dt.$$
Combining this with (58) and (59), it follows that
$$\|z(0,\cdot)\|_{L^2(0,1)^n}^2 \le 2\,\|\theta_-(0,\cdot)\|_{L^2(0,1)^m}^2 + 2C\int_0^T \|\theta_-(t,1)\|_{\mathbb{C}^m}^2\,dt + 2C\int_0^T \|z_-(t,1)\|_{\mathbb{C}^m}^2\,dt.$$
On the other hand, using the method of characteristics and the condition θ_-(T, ·) = 0, we
have
$$\|\theta_-(0,\cdot)\|_{L^2(0,1)^m}^2 + \int_0^T \|\theta_-(t,1)\|_{\mathbb{C}^m}^2\,dt \le C \int_0^T \|\theta_-(t,0)\|_{\mathbb{C}^m}^2\,dt.$$
Combining the previous estimates with (60), we obtain (57). To conclude, we consider the operators P and L defined by
$$(Pv)(t) = \int_0^T H(t,s)\,v(s)\,ds, \qquad (Lz^1)(s) = z_-(s,0).$$
From the previous point, (52) is fulfilled. It is also well-known that operators of the form of
P are compact. Finally, we easily check with the method of characteristics that L satisfies
the remaining estimate (53). This concludes the proof of Theorem 7.1.
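Regarding the compactness claim, one standard justification (assuming, as the formulas above suggest, that P is viewed as an operator on L^2(0,T)^m) is that P is an integral operator whose kernel H belongs to L^∞((0,T) × (0,T))^{m×m} ⊂ L^2((0,T) × (0,T))^{m×m}; it is therefore a Hilbert-Schmidt operator, hence compact, with in particular
$$\|Pv\|_{L^2(0,T)^m}^2 \le \left( \int_0^T\!\!\int_0^T \|H(t,s)\|_{\mathbb{C}^{m\times m}}^2\,ds\,dt \right) \|v\|_{L^2(0,T)^m}^2.$$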
In this last section we will finally prove the second part of Theorem 1.11.
We start with systems of the form (Λ, −, Q, G); we will deal with the initial system
(Λ, M, Q, −) in the next section.
Theorem 8.1. Let Q0 ∈ Rp×m be in canonical form and let G ∈ L∞ (0, 1)n×m with G−− = 0
and G+− ∈ C(Q0 ).
g_{m+ρ_0+1,m} = 1.
As for Theorem 1.11, we use the convention that the undefined quantities are simply not taken
into account, which more precisely gives:
In the second part of the statement of Theorem 8.1 we only discussed the case when (13) fails
since, otherwise, the time on the right-hand side of the inequality in (61) coincides with the time
on the right-hand side of the inequality in (39), and it then follows from item (i) of Theorem 5.1 that
item (i) of Theorem 8.1 becomes a necessary and sufficient condition.
This result shows in particular that the largest value that Tinf(Λ, −, Q_0, G) can take with respect to G ∈ L^∞(0,1)^{n×m} when G−− = 0 and G+− ∈ C(Q_0) is equal to the quantity on the
right-hand side of the inequality in (61). This can be extended to arbitrary boundary coupling
matrices and arbitrary G ∈ L^∞(0,1)^{n×m} thanks to Proposition 4.1, Proposition 6.1 and Theorem 7.1.
Proof of Theorem 8.1. 1) We begin with the proof of the first item. Let first i ∈ {1, . . . , m} be
fixed. Since T ≥ T_i(Λ), which means that s_i^in(T, x) > 0 for every x ∈ (0, 1) as we have seen
in the first step of the proof of Theorem 5.1, and since G−− = 0 by assumption, the null
controllability condition y_i(T, ·) = 0 is equivalent to (see (29) and (27))
$$u_i\!\left(s_i^{in}(T, \cdot)\right) = 0 \quad \text{in } (0,1).$$
Since i ≤ m, the map x ↦ s_i^in(T, x) is non-decreasing (see (22)) with s_i^in(T, 1) = T. Thus,
the previous condition is also equivalent to
$$u_i = 0 \quad \text{in } \left(s_i^{in}(T, 0),\, T\right). \tag{62}$$
2) Let us now consider i ∈ {m + 1, . . . , n}. Since T ≥ T_i(Λ), the null controllability condition
y_i(T, x) = 0 is equivalent to (see (29) and (28))
$$a_i(x) + b_i(x) = 0,$$
where
$$a_i(x) = \sum_{\substack{j=1 \\ j \notin \{c_1, \ldots, c_{\rho_0}\}}}^{m} \left( q^0_{i-m,j}\, y_j\!\left(s_i^{in}(T,x),\, 0\right) + \int_{s_i^{in}(T,x)}^{T} g_{ij}\!\left(\chi_i(s; T, x)\right) y_j(s, 0)\, ds \right),$$
and
$$b_i(x) = \sum_{k=1}^{\rho_0} \left( q^0_{i-m,c_k}\, y_{c_k}\!\left(s_i^{in}(T,x),\, 0\right) + \int_{s_i^{in}(T,x)}^{T} g_{i c_k}\!\left(\chi_i(s; T, x)\right) y_{c_k}(s, 0)\, ds \right).$$
• We first consider the case i ≥ m + ρ_0 + 1 (which happens only if ρ_0 < p). Clearly, we have
b_i = 0 in that situation since Q_0 is in canonical form, G+− ∈ C(Q_0) and (12). Let us show
that we can choose u_j for j ∉ {c_1, . . . , c_{ρ_0}} so that a_i = 0 as well. Since x ↦ s_i^in(T, x)
is non-increasing for i ≥ m + 1 (recall (22)), it is sufficient to choose them such that
$$y_j(\cdot, 0) = 0 \quad \text{in } \left(s_i^{in}(T, 1),\, T\right). \tag{63}$$
As a result, we see that (63) holds if, and only if, (see (29), (27) and recall that G−− = 0)
$$u_j\!\left(s_j^{in}(\cdot, 0)\right) = 0 \quad \text{in } \left(s_i^{in}(T, 1),\, T\right).$$
Observe that this is compatible with (62) since these two intervals are disjoint.
• Let us now consider the case i ≤ m + ρ_0 (which happens only if ρ_0 ≠ 0). Since Q_0 is in
canonical form, G+− ∈ C(Q_0) and (12), we see that a_i(x) + b_i(x) = 0 is equivalent to
$$a_i(x) + y_{c_{i-m}}\!\left(s_i^{in}(T,x),\, 0\right) + \sum_{k=i-m+1}^{\rho_0} \int_{s_i^{in}(T,x)}^{T} g_{i c_k}\!\left(\chi_i(s; T, x)\right) y_{c_k}(s, 0)\, ds = 0. \tag{64}$$
Let us show that we can choose u_{c_1}, . . . , u_{c_{ρ_0}} so that this identity is satisfied. By assumption, we have T ≥ T_i(Λ) + T_{c_{i−m}}(Λ) for every i ∈ {m + 1, . . . , m + ρ_0}. As in the previous
point we can check that this condition can be written as
As a result, we see that (64) holds if, and only if, (see (29), (27) and recall that G−− = 0)
$$u_{c_{i-m}}\!\left(s_{c_{i-m}}^{in}\!\left(s_i^{in}(T,x),\, 0\right)\right) = -a_i(x) - \sum_{k=i-m+1}^{\rho_0} \int_{s_i^{in}(T,x)}^{T} g_{i c_k}\!\left(\chi_i(s; T, x)\right) y_{c_k}(s, 0)\, ds$$
(the map x ↦ s_{c_{i−m}}^in(s_i^in(T, x), 0) is non-increasing and s_i^in(T, 0) = T). Observe once
again that this is compatible with (62) since these two intervals are disjoint.
This concludes the proof of the first item (i) of Theorem 8.1.
3) Let us now prove item (ii) of Theorem 8.1. Assume that the condition (13) fails, let G be
the constant matrix introduced in the statement, and assume that the corresponding system
(Λ, −, Q_0, G) is null controllable in time T. Since (13) fails, the condition (61) is simply
Since the system (Λ, −, Q_0, G) is null controllable in time T by assumption, the following
2 × 2 subsystem also has to be null controllable in time T:
$$\begin{cases}
\dfrac{\partial y_m}{\partial t}(t,x) + \lambda_m(x)\,\dfrac{\partial y_m}{\partial x}(t,x) = 0,\\[2mm]
\dfrac{\partial y_{m+\rho_0+1}}{\partial t}(t,x) + \lambda_{m+\rho_0+1}(x)\,\dfrac{\partial y_{m+\rho_0+1}}{\partial x}(t,x) = y_m(t,0),\\[2mm]
y_m(t,1) = u_m(t), \qquad y_{m+\rho_0+1}(t,0) = q^0_{\rho_0+1,m}\, y_m(t,0).
\end{cases}$$
Let us show that, whether q^0_{ρ_0+1,m} = 1 or q^0_{ρ_0+1,m} = 0, we necessarily have (65). If
q^0_{ρ_0+1,m} = 1, then this follows from item (i) of Theorem 5.1. Let us then consider the case
q^0_{ρ_0+1,m} = 0. As before, it is clearly necessary that T ≥ T_{m+ρ_0+1}(Λ) and, under this condition, the null controllability condition y_{m+ρ_0+1}(T, x) = 0 becomes equivalent to (see (29)
and (28))
$$\int_{s_{m+\rho_0+1}^{in}(T,x)}^{T} y_m(s,0)\,ds = 0.$$
Equivalently, after the change of variables s = s_{m+ρ_0+1}^in(T, ξ),
$$\int_0^x y_m\!\left(s_{m+\rho_0+1}^{in}(T,\xi),\, 0\right)\frac{\partial s_{m+\rho_0+1}^{in}}{\partial \xi}(T,\xi)\,d\xi = 0,$$
which, differentiating with respect to x, leads to
$$y_m\!\left(s_{m+\rho_0+1}^{in}(T,\cdot),\, 0\right) = 0 \quad \text{in } (0,1).$$
It is now not difficult to see that we can choose u_m such that this condition holds if, and only
if, we have (65).
Let us now show how to combine all the previous results in order to obtain the desired characterization of the largest minimal null control time for the initial system (Λ, M, Q, −).
for every G ∈ L∞ (0, 1)n×m with G−− = 0 and G+− ∈ C(Q0 ), where Q0 is the canonical
form of Q.
• By Theorem 7.1, this inequality remains true for every G ∈ L∞ (0, 1)n×m with G+− ∈
C(Q0 ).
• By Proposition 6.1, this inequality remains true for every G ∈ L∞ (0, 1)n×m .
• By Proposition 4.1, this inequality remains true by changing Q0 into Q.
• By Proposition 3.1 and Theorem 3.2, this inequality remains true for the system
(, M, Q, −) for any M ∈ L∞ (0, 1)n×n .
In summary, we have established the following upper bound:
$$T_{\inf}(\Lambda, M, Q) \le \max\left\{ \max_{k \in \{1, \ldots, \rho_0\}} \left( T_{m+k}(\Lambda) + T_{c_k}(\Lambda) \right),\ T_{m+\rho_0+1}(\Lambda) + T_m(\Lambda) \right\},$$
• Then, we know from Theorem 8.1 that this upper bound is the minimal null control time
of the system (Λ, −, Q_0, G) for the constant matrix G ∈ R^{n×m} whose entries are all equal
to zero except for
$$g_{m+\rho_0+1,m} = 1.$$
$$\widetilde{G} \;\longmapsto\; \begin{pmatrix} U^{-1}\,\widetilde{G}_{--}\,U \\[1mm] L\,\widetilde{G}_{+-}\,U \end{pmatrix}, \qquad \forall\, \widetilde{G} \in \mathbb{R}^{n \times m}.$$
Therefore, G̃ is the matrix whose entries are all equal to zero except for
where K is the solution to the kernel equations (33) with additional boundary conditions
(35)-(36) provided by A. Let us rewrite these kernel equations by blocks:
$$\begin{cases}
\Lambda_{--}(x)\,\dfrac{\partial K_{--}}{\partial x}(x,\xi) + \dfrac{\partial K_{--}}{\partial \xi}(x,\xi)\,\Lambda_{--}(\xi) + K_{--}(x,\xi)\left( \dfrac{\partial \Lambda_{--}}{\partial \xi}(\xi) + M_{--}(\xi) \right) + K_{-+}(x,\xi)\,M_{+-}(\xi) = 0,\\[2mm]
\Lambda_{--}(x)\,K_{--}(x,x) - K_{--}(x,x)\,\Lambda_{--}(x) = M_{--}(x).
\end{cases}$$
$$\begin{cases}
\Lambda_{--}(x)\,\dfrac{\partial K_{-+}}{\partial x}(x,\xi) + \dfrac{\partial K_{-+}}{\partial \xi}(x,\xi)\,\Lambda_{++}(\xi) + K_{--}(x,\xi)\,M_{-+}(\xi) + K_{-+}(x,\xi)\left( \dfrac{\partial \Lambda_{++}}{\partial \xi}(\xi) + M_{++}(\xi) \right) = 0,\\[2mm]
\Lambda_{--}(x)\,K_{-+}(x,x) - K_{-+}(x,x)\,\Lambda_{++}(x) = M_{-+}(x).
\end{cases}$$
$$\begin{cases}
\Lambda_{++}(x)\,\dfrac{\partial K_{+-}}{\partial x}(x,\xi) + \dfrac{\partial K_{+-}}{\partial \xi}(x,\xi)\,\Lambda_{--}(\xi) + K_{+-}(x,\xi)\left( \dfrac{\partial \Lambda_{--}}{\partial \xi}(\xi) + M_{--}(\xi) \right) + K_{++}(x,\xi)\,M_{+-}(\xi) = 0,\\[2mm]
\Lambda_{++}(x)\,K_{+-}(x,x) - K_{+-}(x,x)\,\Lambda_{--}(x) = M_{+-}(x).
\end{cases}$$
$$\begin{cases}
\Lambda_{++}(x)\,\dfrac{\partial K_{++}}{\partial x}(x,\xi) + \dfrac{\partial K_{++}}{\partial \xi}(x,\xi)\,\Lambda_{++}(\xi) + K_{+-}(x,\xi)\,M_{-+}(\xi) + K_{++}(x,\xi)\left( \dfrac{\partial \Lambda_{++}}{\partial \xi}(\xi) + M_{++}(\xi) \right) = 0,\\[2mm]
\Lambda_{++}(x)\,K_{++}(x,x) - K_{++}(x,x)\,\Lambda_{++}(x) = M_{++}(x).
\end{cases}$$
Note that the subsystems satisfied by (K−− , K−+ ) and (K+− , K++ ) are not coupled. By
uniqueness of the solution to these equations (see Theorem 3.4), we see that
M−+ = 0 =⇒ K−+ = 0,
M++ = A++ = 0, M−+ = 0 =⇒ K++ = 0,
M−− = A−− = 0, K−+ = 0 =⇒ K−− = 0.
Let i ∈ {m + 1, . . . , n} and j ∈ {1, . . . , m} be fixed. The equation for kij is now simply
$$\zeta_{ij}\!\left(s_{ij}^{in}(x, \xi);\, x, \xi\right) = s_{ij}^{in}(x, \xi).$$
Thus, the desired condition k_{ij}(·, 0) = −ĝ_{ij}/λ_j(0) is equivalent to
Acknowledgments
This project was supported by National Natural Science Foundation of China (Nos. 12122110
and 12071258), Young Scholars Program of Shandong University (No. 2016WLJH52) and Na-
tional Science Centre, Poland UMO-2020/39/D/ST1/01136. For the purpose of Open Access,
the authors have applied a CC-BY public copyright licence to any Author Accepted Manuscript
(AAM) version arising from this submission.
In this appendix we present an explicit example of hyperbolic systems which are not equiva-
lent in the sense of Definition 1.18. This example is important to illustrate that, in general, it is
not possible to obtain a simpler system than the one we obtained in the present article if we only
use invertible transformations (see Remark 6.2). It also motivates the use of the compactness-
uniqueness method to establish the important result Theorem 7.1. We refer to [5, Section 4.3] for
a close but different example.
We consider the following simple 3 × 3 systems with constant coefficients:
$$\begin{cases}
\dfrac{\partial y_1}{\partial t}(t,x) - \dfrac{\partial y_1}{\partial x}(t,x) = 0,\\[2mm]
\dfrac{\partial y_2}{\partial t}(t,x) - \dfrac{1}{2}\,\dfrac{\partial y_2}{\partial x}(t,x) = a\,y_1(t,0),\\[2mm]
\dfrac{\partial y_3}{\partial t}(t,x) + \dfrac{\partial y_3}{\partial x}(t,x) = b\,y_2(t,0),
\end{cases} \tag{67}$$
where a, b ∈ R are some parameters, and with boundary conditions
$$y_1(t,1) = u_1(t), \qquad y_2(t,1) = u_2(t), \qquad y_3(t,0) = y_1(t,0). \tag{68}$$
• Q is in canonical form.
Clearly, ρ = ρ0 = 1 and (r1 , c1 ) = (1, 1). It follows from the results of the present article
(actually, a direct proof is also possible) that the minimal null control time of the system (67)-(68)
is
In particular, the system (67)-(68) is null controllable in time T for every T > 2. Let us now
study the null controllability properties of this system in this critical time:
Proposition A.1. The system (67)-(68) is null controllable in time T = 2 if, and only if,
$$ab \notin \left\{ -\left(\frac{\pi}{2} + k\pi\right)^2 \;\middle|\; k \in \mathbb{N} \right\}. \tag{69}$$
Remark A.2. It follows from this result and Proposition 1.19 that
In particular, it is not possible to transform the system (Λ, −, Q, G_{ab}) into (Λ, −, Q, G_{0b}) or
(Λ, −, Q, G_{a0}) when ab belongs to the set defined in (69) (in other words, we cannot remove (G_{ab})−− nor (G_{ab})+− in this
case).
Proof of Proposition A.1. The solution to the system (67)-(68) is explicit (see Section 2):
$$y_1(t,x) = \begin{cases} u_1(t-1+x) & \text{if } t-1+x > 0,\\[1mm] y_1^0(t+x) & \text{if } t-1+x < 0,\end{cases}$$
$$y_2(t,x) = \begin{cases} u_2\big(t-2(1-x)\big) + a\displaystyle\int_{t-2(1-x)}^{t} y_1(s,0)\,ds & \text{if } t-2(1-x) > 0,\\[3mm] y_2^0\!\left(\dfrac{t}{2}+x\right) + a\displaystyle\int_{0}^{t} y_1(s,0)\,ds & \text{if } t-2(1-x) < 0,\end{cases}$$
$$y_3(t,x) = \begin{cases} y_1(t-x,0) + b\displaystyle\int_{t-x}^{t} y_2(s,0)\,ds & \text{if } t-x > 0,\\[3mm] y_3^0(x-t) + b\displaystyle\int_{0}^{t} y_2(s,0)\,ds & \text{if } t-x < 0.\end{cases}$$
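For illustration purposes only, the above formulas are straightforward to evaluate numerically. The following sketch computes y(t, x) pointwise from these expressions; the values of a, b, the controls u_1, u_2 and the initial data are ours and purely illustrative, and the integrals are approximated by a simple trapezoid rule:

```python
import numpy as np

# Illustrative data (our choice, not from the paper): parameters, controls and initial data.
a, b = 1.0, 1.0
u1 = lambda t: np.sin(np.pi * t)     # control acting on y1 at x = 1
u2 = lambda t: 0.0                   # control acting on y2 at x = 1
y10 = lambda x: 0.0                  # initial data y1(0, .)
y20 = lambda x: 0.0                  # initial data y2(0, .)
y30 = lambda x: 0.0                  # initial data y3(0, .)

def integral(f, lo, hi, n=401):
    """Composite trapezoid approximation of the integral of f over (lo, hi)."""
    if hi <= lo:
        return 0.0
    s = np.linspace(lo, hi, n)
    v = np.array([f(si) for si in s])
    return float(np.sum((v[1:] + v[:-1]) / 2.0) * (s[1] - s[0]))

def y1(t, x):
    # speed -1: the value comes from the control u1 (at x = 1) or from the initial data
    return u1(t - 1 + x) if t - 1 + x > 0 else y10(t + x)

def y2(t, x):
    # speed -1/2 with source term a * y1(t, 0)
    if t - 2 * (1 - x) > 0:
        return u2(t - 2 * (1 - x)) + a * integral(lambda s: y1(s, 0.0), t - 2 * (1 - x), t)
    return y20(t / 2 + x) + a * integral(lambda s: y1(s, 0.0), 0.0, t)

def y3(t, x):
    # speed +1 with source term b * y2(t, 0) and boundary condition y3(t, 0) = y1(t, 0)
    if t - x > 0:
        return y1(t - x, 0.0) + b * integral(lambda s: y2(s, 0.0), t - x, t)
    return y30(x - t) + b * integral(lambda s: y2(s, 0.0), 0.0, t)

print(y1(1.5, 0.25), y2(1.5, 0.25), y3(1.5, 0.25))
```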
Clearly, the null controllability condition y1 (2, ·) = 0 is satisfied if, and only if,
$$u_1 = 0 \quad \text{in } (1, 2).$$
Similarly, the null controllability condition y_2(2, ·) = 0 holds if, and only if,
$$u_2(t) = -a\int_t^2 y_1(s,0)\,ds, \qquad t \in (0,2).$$
Thus, the control u_2 is uniquely determined once the values of the control u_1 in (0, 1) are known.
The remaining condition y_3(2, x) = 0 is equivalent to
$$y_1(2-x, 0) + b\int_{2-x}^{2} y_2(s,0)\,ds = 0,$$
and thus to
$$u_1(1-x) + b\int_{2-x}^{2} y_2^0\!\left(\frac{s}{2}\right) ds + abx\int_{0}^{1} y_1^0(\theta)\,d\theta + ab\int_{2-x}^{2}\int_{1}^{s} u_1(\theta - 1)\,d\theta\,ds = 0.$$
Performing the change of variables t = 1 − x, this can be rewritten as
$$u_1(t) + ab\int_{t}^{1}\int_{0}^{s} u_1(\sigma)\,d\sigma\,ds = f(t), \qquad t \in (0,1), \tag{70}$$
where we introduced the following function depending only on the initial data:
$$f(t) = -b\int_{1+t}^{2} y_2^0\!\left(\frac{s}{2}\right) ds - ab(1-t)\int_{0}^{1} y_1^0(\theta)\,d\theta. \tag{71}$$
Since K is compact, the Fredholm alternative says that (72) has a solution if, and only if,
$$\begin{pmatrix} f \\ 0 \end{pmatrix} \in \left(\ker(\mathrm{Id} - K^*)\right)^{\perp}. \tag{73}$$
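We recall, as a reminder only, the form of the Fredholm alternative used here: for a compact operator K on a Hilbert space,
$$\operatorname{ran}(\mathrm{Id} - K) = \left(\ker(\mathrm{Id} - K^*)\right)^{\perp} \quad \text{and} \quad \dim \ker(\mathrm{Id} - K) = \dim \ker(\mathrm{Id} - K^*) < \infty.$$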
We can check that this ODE has a nonzero solution if, and only if, ab belongs to the set introduced in (69). It follows that we have two possibilities:
• If ab does not belong to this set, then ker(Id − K∗) = {0} and (72) has a (unique) solution u_1. This shows that the
system (67)-(68) is null controllable in time T = 2.
• If ab belongs to this set, then there exists a nonzero (α̃, β̃) ∈ ker(Id − K∗). Necessarily, α̃ ≠ 0 and thus
It is clear that we can construct y_2^0 and y_1^0 that satisfy (71) for this f (take for instance y_1^0 = 0
and y_2^0(x) = f(2x − 1)/b for x ∈ [1/2, 1] and y_2^0(x) = 0 otherwise; note that b ≠ 0 in the
case considered). For such an f, the condition (73) fails and thus there is no corresponding
solution u_1 to (70), meaning that the system (67)-(68) is not null controllable in time T = 2.
Remark A.3. We have seen during the proof that, when ab does not belong to the set introduced in (69), the control that brings the
solution to zero in the critical time T = 2 is unique (it can also be written explicitly).
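Let us also point out that the characterization (69) can be checked numerically. One way to see the critical set is to observe that the homogeneous version of (70), u(t) + ab∫_t^1∫_0^s u(σ) dσ ds = 0, has a nonzero solution exactly when ab = −(π/2 + kπ)^2 (differentiating twice leads to u′′ = ab u with u′(0) = 0 and u(1) = 0). The following sketch, based on an illustrative uniform-grid trapezoid discretization of our choosing (not taken from the paper), computes the smallest singular value of the discretized operator; one expects a value close to zero, up to discretization error, precisely at these critical values of ab:

```python
import numpy as np

def operator_matrix(ab, N=400):
    """Trapezoid discretization on [0, 1] of the operator
    u -> u(t) + ab * int_t^1 int_0^s u(sigma) dsigma ds."""
    h = 1.0 / (N - 1)
    C = np.zeros((N, N))   # (C u)_i ~ int_0^{t_i} u
    for i in range(1, N):
        C[i, 0] = C[i, i] = h / 2.0
        C[i, 1:i] = h
    D = np.zeros((N, N))   # (D v)_i ~ int_{t_i}^1 v
    for i in range(N - 1):
        D[i, i] = D[i, N - 1] = h / 2.0
        D[i, i + 1:N - 1] = h
    return np.eye(N) + ab * (D @ C)

for ab in [-(np.pi / 2.0) ** 2, -1.0]:
    smin = np.linalg.svd(operator_matrix(ab), compute_uv=False)[-1]
    print(f"ab = {ab:+.4f}  ->  smallest singular value ~ {smin:.2e}")
```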
The goal of this appendix is to give a proof of Theorem 7.7. It is inspired by the proofs of
[8, Theorem 2] and [13, Lemma 2.6] (see also the references therein).
Here and in what follows, it will be more convenient to work with the expression S(t)∗z^1
rather than z(t) = S(T − t)∗z^1. The corresponding assumptions (52), (53) and (54) become:
$$\left\|S(T)^* z^1\right\|_H^2 \le C\left( \int_0^T \left\|B^* S(t)^* z^1\right\|_U^2\,dt + \left\|P L z^1\right\|_{E_2}^2 \right), \tag{74}$$
$$\left\|L z^1\right\|_{E_1}^2 \le C\left( \left\|S(T)^* z^1\right\|_H^2 + \int_0^T \left\|B^* S(t)^* z^1\right\|_U^2\,dt \right), \tag{75}$$
$$\left\|S(T - t_2)^* z^1\right\|_H^2 \le C\left( \left\|S(T - t_1)^* z^1\right\|_H^2 + \int_{T - t_2}^{T - t_1} \left\|B^* S(t)^* z^1\right\|_U^2\,dt \right). \tag{76}$$
1) Let T > T_0 be fixed. By duality (see Theorem 7.6), we have to prove that there exists C > 0
such that, for every z^1 ∈ D(A∗),
$$\left\|S(T)^* z^1\right\|_H^2 \le C \int_0^T \left\|B^* S(t)^* z^1\right\|_U^2\,dt. \tag{77}$$
We argue by contradiction and assume that the observability inequality (77) does not hold.
Then, there exists a sequence (z_n^1)_{n≥1} ⊂ D(A∗) such that, for every n ≥ 1,
$$\left\|S(T)^* z_n^1\right\|_H^2 > n \int_0^T \left\|B^* S(t)^* z_n^1\right\|_U^2\,dt.$$
In particular S(T)∗z_n^1 ≠ 0 and we can normalize z_n^1, still denoted by the same, in such a way
that
$$\left\|S(T)^* z_n^1\right\|_H = 1, \qquad \int_0^T \left\|B^* S(t)^* z_n^1\right\|_U^2\,dt \xrightarrow[n \to +\infty]{} 0.$$
Since P is compact, we can extract a subsequence, still denoted by (z_n^1)_{n≥1}, such that
Using now the estimate (74), we obtain that (S(T)∗z_n^1)_{n≥1} is a Cauchy sequence in H, and
thus converges: there exists f ∈ H such that
N_T = {0}, (78)
Indeed, if f ∈ N_{τ_2} and (z_n^1)_{n≥1} ⊂ D(A∗) denotes an associated sequence, then we easily
check that f ∈ N_{τ_1} by considering the sequence (S(τ_2 − τ_1)∗z_n^1)_{n≥1}.
3) Let us now show that
associated sequence. In particular, for every k ≥ 1, there exists n_k ≥ 1 such that, denoting by
w^{1,k} = z_{n_k}^{1,k}, we have
$$\left\|S(\tau)^* w^{1,k} - f^k\right\|_H \le \frac{1}{k}, \qquad \left\|B^* S(\cdot)^* w^{1,k}\right\|_{L^2(0,\tau;U)} \le \frac{1}{k}, \qquad \forall k \ge 1.$$
Since (f^k)_{k≥1} is bounded, so is (S(τ)∗w^{1,k})_{k≥1}. Using the same reasoning as in Step 1), we
deduce from the estimates (75) and (74) that (S(τ)∗w^{1,k})_{k≥1} has a Cauchy subsequence. It
follows that (f^k)_{k≥1} has a Cauchy subsequence as well, and thus has a converging subsequence.
4) The next step is to establish that
Let then f ∈ N_τ. By definition, there exists a sequence (z_n^1)_{n≥1} ⊂ D(A∗) such that
As before, it follows from the estimates (75) and (74) that there exists g ∈ H such that
f = S(ε)∗ g.
Let us now prove that g ∈ D(A∗ ). By definition of the domain of the generator of a semi-
group, we have to show that, for any sequence tn > 0 with tn → 0 as n → +∞, the sequence
$$u_n = \frac{S(t_n)^* g - g}{t_n}$$
converges in H as n → +∞ and that its limit does not depend on the sequence (tn )n . Let
n0 ≥ 1 be large enough so that tn ≤ ε for every n ≥ n0 . From (84) and (83) we easily see that
Thus,
un ∈ Nτ −ε , ∀n ≥ n0 .
Let now μ ∈ ρ(A∗) ≠ ∅ be fixed and let us introduce the following norm on N_{τ−ε}:
$$\|z\|_{-1} = \left\|(\mu - A^*)^{-1} z\right\|_H.$$
$$(\mu - A^*)^{-1} u_n = \frac{S(t_n)^* - \mathrm{Id}}{t_n}\,(\mu - A^*)^{-1} g \;\xrightarrow[n \to +\infty]{}\; A^* (\mu - A^*)^{-1} g \quad \text{in } H. \tag{86}$$
Therefore, (u_n)_{n≥n_0} is a Cauchy sequence in N_{τ−ε} for the norm ‖·‖_{−1}. Since N_{τ−ε} is finite
dimensional (recall (80)), all the norms are equivalent on N_{τ−ε}. Thus, (u_n)_{n≥n_0} is a Cauchy
sequence for the usual norm ‖·‖_H as well and, as a result, converges for this norm. It is clear
from (86) that its limit does not depend on the sequence (t_n)_n. This shows that g ∈ D(A∗)
and thus f = S(ε)∗g ∈ D(A∗). In addition, we have
Nτ ⊂ ker B ∗ , ∀τ ∈ (T0 , T ).
Let ε ∈ (0, τ − T_0) be arbitrary. We use the same notations as in the previous step. Since,
by assumption, H is an admissible subspace for (A, B) (see Definition 7.4), the map z^1 ∈
D(A∗) ↦ B∗S(ε − ·)∗z^1 ∈ L^2(0, ε; U) can be extended to a bounded linear operator Φ ∈
L(H, L^2(0, ε; U)). From (84) and the continuity of Φ, we have
Since z_n^1 ∈ D(A∗), we have (Φ S(τ − ε)∗z_n^1)(t) = B∗S(τ − t)∗z_n^1 for t ∈ (0, ε). From (83) and
uniqueness of the limit, we deduce that
$$\Phi g = 0.$$
Since g ∈ D(A∗), we have (Φ g)(t) = B∗S(ε − t)∗g and the map t ∈ [0, ε] ↦ B∗S(ε − t)∗g
is continuous. It follows that
$$B^* f = B^* S(\varepsilon)^* g = (\Phi g)(0) = 0.$$
6) Next, we observe that there exist τ ∈ (T_0, T) and ε ∈ (0, τ − T_0) such that
$$N_\tau = N_{\tau - \varepsilon}. \tag{87}$$
Indeed, from (80) and (79), the sequence of integers (dim N_{T−(T−T_0)/k})_{k≥2} is non-increasing
and thus stationary: there exists k_0 ≥ 2 such that
$$N_\tau = N_{T-\delta}, \qquad \forall \tau \in [T - \delta, T).$$
A∗ φ = λφ.
Since Nτ ⊂ ker B ∗ , we also have φ ∈ ker B ∗ and this is a contradiction with the Fattorini-
Hautus test (51).
Remark B.1. Let us stress that the end of our proof differs from the one in [8, Section 2.2].
Indeed, in this reference, the conclusion of the proof relied on the fact that the semigroup is
nilpotent, that is
This readily implies that the operator A∗ has no eigenvalues and this is how the authors conclude
that NT = {0}. On the other hand, in our proof above, we only made use of the Fattorini-Hautus
test (51) (which is trivially checked if the operator A∗ has no eigenvalues). Besides, this is op-
timal, in the sense that this test is always a necessary condition for the system (A, B) to be null
controllable in some time.
Finally, let us add that for the example of the hyperbolic system (Λ, −, Q, G) the corresponding adjoint semigroup is not always nilpotent. Notably, the strictly lower triangular structure of
G−− was used at the end of [8, Section 2.2] to prove such a property.
References
[1] Farid Ammar-Khodja, Yacine Mokhtari, Boundary controllability of two coupled wave equations with space-time
first-order coupling in 1-D, J. Evol. Equ. 22 (2022) 31.
[2] Georges Bastin, Jean-Michel Coron, Stability and Boundary Stabilization of 1-D Hyperbolic Systems, Progress in
Nonlinear Differential Equations and Their Applications, vol. 88, Birkhäuser/Springer, Cham, 2016, Subseries in
Control.
[3] Pavol Brunovský, A classification of linear controllable systems, Kybernetika 6 (1970) 173–188.
[4] Jean-Michel Coron, Long Hu, Guillaume Olive, Peipei Shang, Boundary stabilization in finite time of one-
dimensional linear hyperbolic balance laws with coefficients depending on time and space, J. Differ. Equ. 271
(2021) 1109–1170.
[5] Jean-Michel Coron, Hoai-Minh Nguyen, Optimal time for the controllability of linear hyperbolic systems in one-
dimensional space, SIAM J. Control Optim. 57 (2) (2019) 1127–1156.
[6] Jean-Michel Coron, Hoai-Minh Nguyen, Finite-time stabilization in optimal time of homogeneous quasilinear hy-
perbolic systems in one dimensional space, ESAIM Control Optim. Calc. Var. 26 (2020) 119.
[7] Jean-Michel Coron, Hoai-Minh Nguyen, Lyapunov functions and finite time stabilization in optimal time for homo-
geneous linear and quasilinear hyperbolic systems, preprint, https://arxiv.org/abs/2007.04104, 2020.
[8] Jean-Michel Coron, Hoai-Minh Nguyen, Null-controllability of linear hyperbolic systems in one dimensional space,
Syst. Control Lett. 148 (2021) 104851.
[9] Jean-Michel Coron, Hoai-Minh Nguyen, On the optimal controllability time for linear hyperbolic systems with
time-dependent coefficients, preprint, https://arxiv.org/abs/2103.02653, 2021.
[10] Jean-Michel Coron, Control and Nonlinearity, Mathematical Surveys and Monographs, vol. 136, American Mathe-
matical Society, Providence, RI, 2007.
[11] Jean-Michel Coron, Rafael Vazquez, Miroslav Krstic, Georges Bastin, Local exponential H 2 stabilization of a 2 × 2
quasilinear hyperbolic system using backstepping, SIAM J. Control Optim. 51 (3) (2013) 2005–2035.
[12] Froilán M. Dopico, Charles R. Johnson, Juan M. Molera, Multiple LU factorizations of a singular matrix, Linear
Algebra Appl. 419 (1) (2006) 24–36.
[13] Michel Duprez, Guillaume Olive, Compact perturbations of controlled systems, Math. Control Relat. Fields 8 (2018)
397–410.
[14] Philip Hartman, Ordinary Differential Equations, Classics in Applied Mathematics, vol. 38, Society for Indus-
trial and Applied Mathematics (SIAM), Philadelphia, PA, 2002, Corrected reprint of the second (1982) edition
[Birkhäuser, Boston, MA, MR0658490 (83e:34002)], with a foreword by Peter Bates.
[15] Long Hu, Florent Di Meglio, Finite-time backstepping boundary stabilization of 3 × 3 hyperbolic systems, in:
Proceedings of the European Control Conference (ECC), July 2015, pp. 67–72.
[16] Long Hu, Florent Di Meglio, Rafael Vazquez, Miroslav Krstic, Control of homodirectional and general heterodirec-
tional linear coupled hyperbolic PDEs, IEEE Trans. Autom. Control 61 (11) (2016) 3301–3314.
[17] Long Hu, Guillaume Olive, Minimal time for the exact controllability of one-dimensional first-order linear hyper-
bolic systems by one-sided boundary controls, J. Math. Pures Appl. (9) 148 (2021) 24–74.
[18] Long Hu, Guillaume Olive, Null controllability and finite-time stabilization in minimal time of one-dimensional
first-order 2 × 2 linear hyperbolic systems, ESAIM Control Optim. Calc. Var. 27 (2021) 96.
[19] Harry Hochstadt, Integral Equations, Pure and Applied Mathematics, John Wiley & Sons, New York-London-
Sydney, 1973.
[20] Long Hu, Sharp time estimates for exact boundary controllability of quasilinear hyperbolic systems, SIAM J. Con-
trol Optim. 53 (6) (2015) 3383–3410.
[21] Long Hu, Rafael Vazquez, Florent Di Meglio, Miroslav Krstic, Boundary exponential stabilization of 1-dimensional
inhomogeneous quasi-linear hyperbolic systems, SIAM J. Control Optim. 57 (2) (2019) 963–998.
[22] Tatsien Li, Controllability and Observability for Quasilinear Hyperbolic Systems, AIMS Series on Applied Mathe-
matics, vol. 3, American Institute of Mathematical Sciences (AIMS)/Higher Education Press, Springfield, MO/Bei-
jing, 2010.
[23] Tatsien Li, Bopeng Rao, Strong (weak) exact controllability and strong (weak) exact observability for quasilinear
hyperbolic systems, Chin. Ann. Math., Ser. B 31 (5) (2010) 723–742.
[24] David L. Russell, On boundary-value controllability of linear symmetric hyperbolic systems, in: Mathematical
Theory of Control, Proc. Conf., Los Angeles, Calif., 1967, Academic Press, New York, 1967, pp. 312–321.
[25] David L. Russell, Controllability and stabilizability theory for linear partial differential equations: recent progress
and open questions, SIAM Rev. 20 (4) (1978) 639–739.
[26] Norbert Weck, A remark on controllability for symmetric hyperbolic systems in one space dimension, SIAM J.
Control Optim. 20 (1) (1982) 1–8.