Linear Complementarity Systems: W.P.M.H. Heemels J.M. Schumacher S. Weiland
Abstract
We introduce a new class of dynamical systems that we call "linear complementarity systems". The evolution of these systems typically consists of a series of continuous phases separated by "events" which cause a change in dynamics and possibly a jump in the state vector. The occurrence of events is governed by certain inequalities similar to those appearing in the Linear Complementarity Problem of mathematical programming. The framework we describe is suitable for certain situations in which both differential equations and inequalities play a role, for instance in mechanics, electrical networks, and dynamic optimization. We present a precise definition of linear complementarity systems and give sufficient conditions for existence and uniqueness of solutions.
1 Introduction
In many technical and economic applications one encounters systems of differential equations and inequalities. For a quick roundup of examples, one may think of the following: motion of rigid bodies subject to unilateral constraints, electrical networks with ideal diodes, optimal control problems with inequality constraints on the states and/or controls, dynamic versions of linear and nonlinear programming problems, and dynamic Walrasian economies. It has to be noted that there is considerable inherent complexity in systems of differential equations and inequalities, since nonsmooth trajectories and possibly even jumps have to be taken into account;
∗Dept. of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands, and CentER and Dept. of Economics, Tilburg University. E-mail: [email protected]
†CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands, and CentER and Dept. of Economics, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands. E-mail: [email protected]
‡Dept. of Electrical Engineering, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands. E-mail: [email protected]
as a result of this, even basic issues such as existence and uniqueness of solutions are difficult to settle. Given the wealth of possible applications, however, it is of interest to overcome these difficulties.
In the literature one can find several lines of research dealing with dynamics subject to inequality constraints, some mainly motivated by problems in mechanics, others more closely connected to operations research and economics. One way in which differential equations and inequalities can be combined is by means of differential inclusions; see [1] and the references given there. By their nature, differential inclusions usually have nonunique solutions; in this paper, however, we shall be interested in systems that are like ODEs in the sense that typically one will have uniqueness of solutions. A formulation of this type is provided by the "sweeping process" of Moreau [17] (see also [16] and [3]), which is mainly geared towards applications in mechanics. In a more economically oriented context, Dupuis and Nagurney [8] and Nagurney and Zhang [18] have recently discussed so-called "projected dynamical systems" for which they prove existence and uniqueness results. In [9] Filippov studies differential equations with discontinuous righthand sides. Van Bokhoven [2] has given a formulation for electrical networks with ideal diodes that fits within the description that will be used below; he mainly discusses equilibria and only briefly touches upon solving the equations in time.
The framework that we shall present in this paper is more general than the one discussed by Moreau and co-workers in that we do not confine ourselves to mechanical systems; on the other hand, we shall discuss only (piecewise) linear systems, whereas Moreau considers fully nonlinear systems. The limitation to linear dynamics is introduced here mainly because it simplifies the discussion of jump phenomena. It will be proven below that the formulation that we give agrees with the one given by Moreau within the class of systems to which both formulations apply. The solution concept proposed by Dupuis and Nagurney is such that no jumps can occur in state trajectories, and so in general their formulation is different from ours. In the terminology of mechanics, we study inelastic rather than elastic collisions in this paper, since we want to allow transitions from constrained to unconstrained modes and vice versa. Elastic collisions bring their own problems in existence and uniqueness of solutions; see for instance [22] and [19].
The present paper continues a line of research begun in [20], where existence and uniqueness results were given for the case of systems with a single inequality constraint. The main result of this paper is to give sufficient conditions for local existence and uniqueness of solutions for systems with several inequality constraints. We do this under a formulation of the mode transition rule that is different (for the multiconstrained case) from the one used in [20]. It seems to be difficult to obtain well-posedness results for the multiconstrained case using the rule of [20]; moreover, this rule is not consistent with Moreau's rule in the case of mechanical systems. Obtaining well-posedness results for Moreau's sweeping process in the multiconstrained case is mentioned as an open problem by Monteiro Marques [16, p. 126]. Of course, this problem is posed in a general nonlinear setting and is certainly not completely solved here, since we consider only systems that have linear dynamics in each mode. Dupuis and Nagurney [8] prove global existence and uniqueness of solutions (under appropriate Lipschitz conditions) for systems with several inequality constraints. However, as noted above, the dynamics they consider is in general different from ours. To illustrate the difference, note that Dupuis and Nagurney also
obtain continuous dependence of the solutions on initial conditions, which is a property that in our context need not hold (see Example 36 below). Filippov [9] presents existence and uniqueness results for differential equations with discontinuous righthand sides. In particular, the righthand side is assumed to be piecewise continuous with a countable number of domains of continuity with nonempty interior (separated by surfaces). The main difference with the work presented in this paper is that we deal with inequalities that are not allowed to be violated. Specification of the domains (called "mode selection" in our set-up) is a nontrivial task and has to be accomplished before formulating the dynamics of linear complementarity systems as differential equations with discontinuous righthand sides. However, this results in domains of continuity ("modes") that have empty interior, which is different from the assumptions in Filippov's work. Furthermore, solutions in [9] are required to be absolutely continuous, while our solutions may contain jumps in order not to violate the inequalities. Continuous dependence of the solutions on initial conditions holds for the systems considered in [9], but in general does not apply to linear complementarity systems.
This paper can be viewed as a continuation of the work of Lötstedt [14], who pioneered the application of the Linear Complementarity Problem (LCP) of mathematical programming to the simulation of the motion of systems of rigid bodies subject to unilateral constraints. There is some change of direction, however, since we consider (piecewise) linear systems rather than (nonlinear) mechanical systems and aim for a complete specification of the system dynamics. Such a specification was not given by Lötstedt; in particular, he does not precisely specify what trajectories should be chosen in case multiple constraints become active at the same time. One of the main objectives of this paper is to give a complete definition of the dynamics of linear complementarity systems, in a form that is suitable for simulation purposes.
The mode-switching behaviour that we study in this paper may also be looked at from a much more general viewpoint as an interaction of differential equations and switching rules. Systems in which continuous dynamics and discrete transition rules are connected to each other are sometimes called "hybrid systems"; these occur for instance when a discrete device, such as a computer program, interacts with a part of the outside world that has its own continuous dynamics, such as a chemical process. Hybrid systems have recently drawn considerable attention both from computer scientists and from control theorists; see for instance [15]. In this literature, existence and uniqueness of solutions is often simply assumed, and easily verifiable sufficient conditions for well-posedness in other than trivial cases are rarely given. The work presented in this paper may be seen as a contribution towards filling this gap.
The paper is organised as follows. We start with an example to motivate the definitions that will be given later. After having dealt with some mathematical preliminaries in Section 3, a formal definition of the class of linear complementarity systems with the corresponding solution concept is given in Section 4. Mode selection techniques are presented in Section 5. Sufficient conditions for local existence and uniqueness of solutions follow in Section 6. After that, we present a computational example to illustrate that our definition is suitable as a basis for the actual simulation of linear complementarity systems. In Section 8, we establish the connection with the sweeping process formulation of Moreau. Finally, conclusions follow in Section 9.
In this paper, the following notational conventions will be in force. IR denotes the real numbers, IR+ the nonnegative real numbers, ĪR+ := IR+ ∪ {∞} and IN := {0, 1, 2, ...}. For a positive integer l, l̄ denotes the set {1, 2, ..., l}. If a is a (column) vector with k real components, we write a ∈ IR^k and denote the i-th component by a_i. M ∈ IR^{m×n} means that M is a real matrix of size m × n. M^⊤ is the transpose of the matrix M. The kernel of M is denoted by Ker M and the image by Im M. Given M ∈ IR^{k×l} and two subsets I ⊆ k̄ and J ⊆ l̄, the (I, J)-submatrix of M is defined as M_{IJ} := (m_{ij})_{i∈I, j∈J}. In case J = l̄, we also write M_{I•}, and if I = k̄, we write M_{•J}. For a vector a, a_I := a_{I•} = (a_i)_{i∈I}. diag(a_1, ..., a_k) denotes the diagonal matrix A ∈ IR^{k×k} with diagonal entries a_1, ..., a_k. Given two vectors a ∈ IR^k and b ∈ IR^l, col(a, b) denotes the vector in IR^{k+l} that arises from stacking a over b.

A rational matrix is a matrix with entries in the field IR(s) of rational functions in one variable. A rational matrix is called proper if for all entries the degree of the numerator is smaller than or equal to the degree of the denominator. A rational matrix is called biproper if it is square, proper, and has an inverse that is proper too.

The set C^∞(IR, IR) denotes the set of functions from IR to IR that are arbitrarily often differentiable.

A vector u ∈ IR^k is called nonnegative, and we write u ≥ 0, if u_i ≥ 0 for i ∈ k̄, and positive (u > 0) if u_i > 0 for i ∈ k̄. If a vector u is not nonnegative, we write u ≱ 0. A sequence of scalars is called lexicographically nonnegative, written as (u^1, u^2, ..., u^k) ⪰ 0, if (u^1, u^2, ..., u^k) = (0, 0, ..., 0) or u^j > 0 where j := min{p ∈ k̄ | u^p ≠ 0}. A sequence of scalars is called lexicographically positive, denoted by (u^1, u^2, ..., u^k) ≻ 0, if (u^1, u^2, ..., u^k) ⪰ 0 and (u^1, u^2, ..., u^k) ≠ (0, 0, ..., 0). For a sequence of vectors, we write (u^1, u^2, ..., u^k) ⪰ 0 when (u^1_i, u^2_i, ..., u^k_i) ⪰ 0 for all i. Likewise, we write (u^1, u^2, ..., u^k) ≻ 0 when (u^1_i, u^2_i, ..., u^k_i) ≻ 0 for all i.

For sets A and B, A \ B := {x ∈ A | x ∉ B} and P(A) denotes the power set of A, i.e. the collection of all subsets of A. For two subspaces V, T of IR^n, the notation V ⊕ T = IR^n means that V and T form a direct sum decomposition of IR^n, i.e. V + T := {v + t | v ∈ V, t ∈ T} = IR^n and V ∩ T = {0}.
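The lexicographic orderings ⪰ and ≻ are easy to state operationally. The following sketch (function names are ours, added for illustration only) tests a scalar sequence; for a sequence of vectors the definitions apply componentwise:

```python
def lex_nonneg(seq):
    """(u^1, ..., u^k) is lexicographically nonnegative iff the sequence is
    all zeros or its first nonzero entry is positive."""
    for u in seq:
        if u != 0:
            return u > 0
    return True  # all entries are zero

def lex_pos(seq):
    """Lexicographically positive: lexicographically nonnegative and not all zero."""
    return lex_nonneg(seq) and any(u != 0 for u in seq)
```

For vectors u^1, ..., u^k one would check `lex_nonneg([u[i] for u in vecs])` for every component index i.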
2 Example
Before we give a formal description of the class of systems under study, we will illustrate some
of its hybrid dynamical aspects by considering the example of two carts connected by a spring
(used also in [20]). The left cart is attached to a wall by a spring. The motion of the left cart
is constrained by a completely inelastic stop. The system is depicted in Figure 1.
For simplicity, the masses of the carts and the spring constants are set to 1. The stop is placed in the equilibrium position of the left cart. We now address the question of how to model such a system. By x1, x2 we denote the deviations of the left and right cart, respectively, from their equilibrium positions, and x3, x4 are the velocities of the left and right cart, respectively. By u we denote the reaction force exerted by the stop. Furthermore, we set y equal to x1. By using
Figure 1: Two-carts system.
simple mechanical laws, we deduce the following dynamical relations for this system:

    ẋ1(t) = x3(t)
    ẋ2(t) = x4(t)
    ẋ3(t) = −2x1(t) + x2(t) + u(t)        (1)
    ẋ4(t) = x1(t) − x2(t)
    y(t) := x1(t).
To model the stop in this setting, we reason as follows. The variable y should be nonnegative, because it is the position of the left cart with respect to the stop. The force exerted by the stop can only act in the positive direction, so u should also be nonnegative. If the left cart is not at the stop at time t (y(t) > 0), the reaction force vanishes at that moment, i.e. u(t) = 0. Similarly, if u(t) > 0, the cart must necessarily be at the stop, i.e. y(t) = 0. This is expressed by

    y(t) ≥ 0,   u(t) ≥ 0,   y(t)u(t) = 0.        (2)
The system has two modes, depending on whether the stop is active or not. We distinguish between the unconstrained mode (u(t) = 0) and the constrained mode (y(t) = 0). The dynamics of these modes are given by the Differential and Algebraic Equations (DAEs)

    unconstrained                          constrained
    ẋ1(t) = x3(t)                          ẋ1(t) = x3(t)
    ẋ2(t) = x4(t)                          ẋ2(t) = x4(t)
    ẋ3(t) = −2x1(t) + x2(t)                ẋ3(t) = −2x1(t) + x2(t) + u(t)
    ẋ4(t) = x1(t) − x2(t)                  ẋ4(t) = x1(t) − x2(t)
    u(t) = 0                               y(t) = x1(t) = 0.

When the system is in either of these modes, the triple (u, x, y) is given by the corresponding dynamics as long as the remaining inequalities in (2),

    unconstrained: y(t) ≥ 0                constrained: u(t) ≥ 0,

are satisfied. A mode change is triggered by violation of one of these inequalities. For this example, the following mode transitions are possible.
Unconstrained → Constrained: y(t) ≥ 0 tends to be violated at a time instant t = t0. The left cart hits the stop and stays there. The velocity of the left cart is reduced to zero instantaneously at the time of impact: the kinetic energy of the left cart is totally absorbed by the stop due to a purely inelastic collision. A state for which this happens is for instance x(t0) = (0, −1, −1, 0)^⊤.
Constrained → Unconstrained: u(t) ≥ 0 tends to be violated at t = t0. The right cart is located at or moving to the right of its equilibrium position, so the spring between the carts is stretched and pulls the left cart away from the stop. This happens for example if x(t0) = (0, 0, 0, 1)^⊤.
Unconstrained → Unconstrained with re-initialisation according to the constrained mode: y(t) ≥ 0 tends to be violated at t = t0. As an example, consider x(t0) = (0, 1, −1, 0)^⊤. At the time of impact, the velocity of the left cart is put to zero just as in the first case. Hence, a state jump or re-initialisation to (0, 1, 0, 0)^⊤ occurs. The right cart is to the right of its equilibrium position and pulls the left cart away from the stop. Stated differently, from (0, 1, 0, 0)^⊤ smooth continuation in the unconstrained mode is possible.
This last transition is special in the sense that first the constrained mode is active, causing the corresponding state jump. After the jump, no smooth continuation is possible in the constrained mode, resulting in a second mode change back to the unconstrained mode.
From state x(t0) = (0, −1, −1, 0)^⊤, we can enter the constrained mode by starting with an instantaneous jump to x(t0+) = (0, −1, 0, 0)^⊤. This jump is caused by a (Dirac) pulse exerted by the stop. In fact, u = δ results in the state jump x(t0+) − x(t0) = (0, 0, 1, 0)^⊤. This motivates the use of distribution theory as a feasible mathematical framework to describe physical phenomena like jumps and collisions.
To summarise, the motion of the carts is governed by a pair of Differential and Algebraic Equations (DAEs), corresponding to the constrained and unconstrained modes. A change of mode is triggered by violation of certain inequalities corresponding to the current mode. The time instants at which this occurs are called "event times," and one problem is to detect the instants at which these events happen. At an event time, the system will switch to a new mode. A mode transition often calls for a state jump or re-initialisation. In the example, we observed velocity jumps when the left cart arrived at the stop with negative velocity. In this paper, the above dynamics will be formalised for the complete class of linear complementarity systems, and special attention is paid to the mode selection problem.
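The event-driven evolution just described can be imitated numerically. The sketch below is our own illustration (not the simulation method developed later in the paper); it integrates the two-carts system (1) with forward Euler, switches between the two modes, and applies the inelastic impact rule. In the constrained mode, u(t) = −x2(t) follows from 0 = ẋ3 = −2x1 + x2 + u with x1 ≡ 0:

```python
import numpy as np

def simulate_two_carts(x0, T=3.0, dt=1e-3):
    """Euler sketch of the two-carts system (1)-(2) with mode switching.
    Returns the modes visited and extremal values of y, u and y*u."""
    x = np.array(x0, float)
    mode = "unconstrained"
    modes_seen = set()
    y_min, u_min, comp_max = np.inf, np.inf, 0.0
    t = 0.0
    while t < T:
        if mode == "unconstrained":
            if x[0] <= 0.0 and x[2] < 0.0:   # left cart reaches the stop moving left
                x[0] = 0.0
                x[2] = 0.0                   # inelastic impact: velocity jump
                if -x[1] >= 0.0:             # stop must push (u = -x2 >= 0), not pull
                    mode = "constrained"
                continue
            u = 0.0
            dx = np.array([x[2], x[3], -2.0*x[0] + x[1], x[0] - x[1]])
        else:                                 # constrained: y = x1 = 0, x3 = 0
            u = -x[1]                         # from 0 = -2*x1 + x2 + u with x1 = 0
            if u < 0.0:                       # spring pulls the cart off the stop
                mode = "unconstrained"
                continue
            dx = np.array([0.0, x[3], 0.0, -x[1]])
        modes_seen.add(mode)
        y_min = min(y_min, x[0])
        u_min = min(u_min, u)
        comp_max = max(comp_max, abs(x[0]*u))
        x = x + dt*dx
        t += dt
    return modes_seen, y_min, u_min, comp_max
```

Starting from x(t0) = (0, −1, −1, 0)^⊤ the trajectory jumps to (0, −1, 0, 0)^⊤, stays in the constrained mode while u = −x2 ≥ 0, and then releases into the unconstrained mode, with y(t) ≥ 0, u(t) ≥ 0 and y(t)u(t) = 0 respected (up to the integration tolerance) throughout.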
3 Mathematical Preliminaries
We consider a linear input-output system ẋ(t) = Kx(t) + Lu(t), y(t) = Mx(t) + Nu(t). The time arguments will often be suppressed. Throughout this section, x(t) ∈ IR^n, u(t) ∈ IR^m and y(t) ∈ IR^r. The system parameters K, L, M and N are constant matrices of corresponding dimensions.
The set of distributions defined on IR with support on [0, ∞) is denoted by D'_+. For more details on distributions, we refer to [23]. Particular examples of elements of D'_+ are the δ-distribution and its derivatives. We denote convolution by juxtaposition, like ordinary multiplication, and denote the delta distribution by δ and its r-th derivative by δ^(r). Linear combinations of these particular distributions will be called impulsive distributions; that is, u ∈ D'_+ is an impulsive distribution if it can be written as u = Σ_{i=0}^{l} u^{-i} δ^{(i)}. A special subclass of D'_+ is the set of regular distributions in D'_+. These are distributions that are smooth on [0, ∞). Formally, u ∈ D'_+ is smooth on [0, ∞) if a function v ∈ C^∞(IR, IR) exists such that

    u(t) = 0 (t < 0),    u(t) = v(t) (t ≥ 0).

Definition 1 An impulsive-smooth distribution is a distribution u ∈ D'_+ of the form u = u_imp + u_reg, where u_imp is impulsive and u_reg is smooth on [0, ∞). The class of these distributions is denoted by C_imp.
In [11], it is shown that the solution (x_{x0,u}, y_{x0,u}) exists, is unique in D'_+^{n+r}, and belongs to C_imp^{n+r}. The solution is given by

    x_{x0,u} = Σ_{i=1}^{l} Σ_{j=1}^{i} K^{i-j} L u^{-i} δ^{(j-1)} + x_reg,        (5)

where the double sum is the impulsive part x_imp, with

    x_reg(t) = e^{Kt} (x0 + Σ_{i=0}^{l} K^i L u^{-i}) + ∫_0^t e^{K(t-τ)} L u_reg(τ) dτ   (t ≥ 0),
    x_reg(t) = 0   (t < 0),        (6)
and

    y_{x0,u} = Σ_{i=1}^{l} Σ_{j=1}^{i} M K^{i-j} L u^{-i} δ^{(j-1)} + Σ_{i=0}^{l} N u^{-i} δ^{(i)} + y_reg,        (7)

where the first two sums form the impulsive part y_imp, with

    y_reg(t) = M x_reg(t) + N u_reg(t),   t ∈ IR.        (8)
If it is clear which x0 and u are meant, we omit these subscripts in x_{x0,u} and y_{x0,u}.

Note that the jump x_{x0,u}(0+) − x0 of the state at time 0 only depends on the impulsive part of the input u. Furthermore, observe that

    x_reg(x0, u) = x_{x_{x0,u}(0+), u_reg},    y_reg(x0, u) = y_{x_{x0,u}(0+), u_reg}.        (10)
We now consider the system (4) under the additional condition that the output y is zero.
Definition 3 A state x0 is said to be consistent for (K, L, M, N) if there exists a regular input u such that

    ẋ = Kx + Lu + x0 δ        (11)
    0 = Mx + Nu

is satisfied. The set of all consistent states for (K, L, M, N) is denoted by V(K, L, M, N) and called the consistent subspace.

Let T_{x0}(K, L, M, N) be the set of possible jumps from initial state x0 caused by impulsive-smooth inputs u that result in regular outputs y_{x0,u}. Formally,
converges in at most n steps to T(K, L, M, N) [11].

Note that V(K, L, M, N) + T(K, L, M, N) is the set of states x0 for which there exists a u ∈ C_imp^m such that (11) holds (see [11, Prop. 3.23]).
Definition 4 The quadruple (K, L, M, N) is called autonomous if for every consistent state x0 there exists exactly one smooth solution (u(·), x(·)) of (11) with x(0) = x0.

One can show that (K, L, M, N) is autonomous if and only if for all x0 ∈ V(K, L, M, N) + T(K, L, M, N) there exists exactly one u ∈ C_imp^m such that (11) is satisfied [11].
Lemma 5 Consider the system (K, L, M, N) and suppose that the number of inputs (m) equals the number of outputs (r). Then the following statements are equivalent.

1. (K, L, M, N) is autonomous.
2. V(K, L, M, N) ⊕ T(K, L, M, N) = IR^n and Ker N = {0}.
3. G(s) := M(sI − K)^{-1}L + N is invertible as a rational matrix.
After these preliminaries from linear systems theory, we now recall some notions from mathematical programming that we shall use. The Linear Complementarity Problem (LCP) [4] is defined as follows.

Given a matrix M ∈ IR^{k×k} and q ∈ IR^k, find w, z ∈ IR^k such that

    w = q + Mz        (15)
    w ≥ 0, z ≥ 0        (16)
    z^⊤ w = 0        (17)

or show that no such z, w exist. We denote this problem by LCP(q, M).
Let a matrix M of size k × k and two subsets I and J of k̄ of the same cardinality be given. The (I, J)-minor of M is the determinant of the square matrix M_{IJ} := (m_{ij})_{i∈I, j∈J}. The (I, I)-minors are also known as the principal minors. M is called a P-matrix if all its principal minors are (strictly) positive. A matrix M is said to be positive definite if x^⊤Mx > 0 for all x ∈ IR^k \ {0}. Note that a positive definite matrix is not necessarily symmetric according to this definition. We state the following results.

Theorem 6 For given M, the problem LCP(q, M) has a unique solution for all vectors q if and only if M is a P-matrix.
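For small k, both the P-matrix property and LCP(q, M) itself can be checked by brute-force enumeration over the 2^k index subsets. The sketch below is for illustration only (practical LCP codes use pivoting schemes such as Lemke's method instead of enumeration):

```python
import numpy as np
from itertools import combinations

def is_P_matrix(M):
    """Theorem 6's condition: every principal minor of M is strictly positive."""
    k = M.shape[0]
    return all(np.linalg.det(M[np.ix_(I, I)]) > 0
               for r in range(1, k + 1)
               for I in combinations(range(k), r))

def solve_lcp(q, M, tol=1e-9):
    """Solve LCP(q, M) of (15)-(17) by guessing the support I of z:
    solve M_II z_I = -q_I, set z on I^c to zero (so z^T w = 0 holds by
    construction since w_I = 0), and accept if w = q + M z >= 0, z >= 0."""
    k = len(q)
    for r in range(k + 1):
        for I in map(list, combinations(range(k), r)):
            z = np.zeros(k)
            if I:
                try:
                    z[I] = np.linalg.solve(M[np.ix_(I, I)], -q[I])
                except np.linalg.LinAlgError:
                    continue           # singular principal submatrix: skip this guess
            w = q + M @ z
            if (z >= -tol).all() and (w >= -tol).all():
                return w, z
    return None                        # no solution found by enumeration
```

For the P-matrix M = [[2, 1], [1, 2]] and q = (−1, −1)^⊤, the unique solution is z = (1/3, 1/3)^⊤ with w = 0.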
or equivalently,

    ẋ(t) = A x(t) + B_I u_I(t)
    0 = C_I x(t) + D_II u_I(t)        (20)
    y_{I^c}(t) = C_{I^c} x(t) + D_{I^c I} u_I(t)
    u_{I^c}(t) = 0.
The set of consistent states for mode I, denoted by V_I, equals V(A, B_I, C_I, D_II). The jump space is given by T_I := T(A, B_I, C_I, D_II). The set of initial states for which an impulsive-smooth input exists such that (19) is satisfied in the distributional sense is V_I + T_I.

We call mode I autonomous if the quadruple (A, B_I, C_I, D_II) is autonomous. A standing assumption in the remainder of this paper will be the following one.

Assumption 8 All modes are autonomous.

By Lemma 5 this is equivalent to saying that G_II(s) := C_I (sI − A)^{-1} B_I + D_II is invertible for each subset I ⊆ k̄. Note that G_II(s) is indeed the (I, I)-submatrix of the rational matrix G(s) := C (sI − A)^{-1} B + D. By the same lemma, Assumption 8 implies V_I ⊕ T_I = IR^n for all I ⊆ k̄.

Under Assumption 8, (19) has a unique impulsive-smooth solution in each individual mode for an arbitrary initial state.
The computation of this flow or solution in mode I for a consistent state of this mode is called DAE simulation. From [11, Thm. 3.10], it follows that the input satisfying a DAE of the form (11) can be represented by a linear state feedback. Substituting this feedback in (19) transforms the DAE into an ordinary differential equation (ODE). Hence, the regular part of a solution u satisfying (19) for some initial state is a Bohl function, i.e. a function of the form

    u(t) = 0 (t < 0),    u(t) = F e^{Gt} v (t ≥ 0)        (21)

for real matrices F, G and a vector v depending on the initial state and the specific mode I.
4.2 Re-initialisation

If x0 ∈ V_I, the corresponding solution (u(·; x0, I), x(·; x0, I), y(·; x0, I)) in mode I is regular. If x0 ∉ V_I, then a re-initialisation of the initial state will be necessary. Indeed, if x0 ∉ V_I, then the solution to (19) calls for a non-regular input u(·; x0, I) ∈ C_imp^k, i.e. an input u with nonzero impulsive part. This impulsive part results in an instantaneous jump or re-initialisation to x(0+) as in (9). By definition, such impulsive inputs cause jumps along T_I. Using (10), we get y_{x(0+), u_reg(·; x0, I)} = y_reg(·; x0, I), i.e. the output of the system (A, B, C, D) with initial state x(0+) and input u_reg(·; x0, I) equals y_reg(·; x0, I). Hence, looking at the components j ∈ I, we get y_{x(0+), u_reg(·; x0, I), j} = y_reg,j(·; x0, I) = 0 for j ∈ I. In words, this equation states that the output of the system (A, B_I, C_I, D_II) with initial state x(0+) and input u_reg,I(·; x0, I) is equal to zero. But this means that x(0+) is a consistent state for mode I, i.e. x(0+) ∈ V_I. Summarizing: we have by definition of T_I that x(0+) − x0 ∈ T_I and x(0+) ∈ V_I. Since V_I ⊕ T_I = IR^n, the jump along T_I from x0 to x(0+) ∈ V_I can be done in only one way. The re-initialised vector x(0+) is the projection of x0 onto V_I along T_I. The projection operator is denoted by P_{V_I}^{T_I}.
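Given bases for V_I and T_I with V_I ⊕ T_I = IR^n, the projector P_{V_I}^{T_I} is straightforward to compute. In the sketch below we use, as an illustration consistent with the example of Section 2, V_I = span{e2, e4} and T_I = span{e1, e3} for the constrained mode of the two-carts system (these bases are our reading of that example, chosen so that the jump (0, 1, −1, 0)^⊤ → (0, 1, 0, 0)^⊤ is recovered):

```python
import numpy as np

def projector_onto_along(V, T):
    """Projector onto span(V) along span(T); the columns of [V T] must
    form a basis of IR^n (direct sum assumption V + T = IR^n, V cap T = {0})."""
    p = V.shape[1]
    Binv = np.linalg.inv(np.hstack([V, T]))  # coordinates w.r.t. the basis [V T]
    return V @ Binv[:p, :]                   # keep the V-coordinates, drop the T-part

# Illustrative bases for the constrained mode of the two-carts example:
V_I = np.array([[0., 0.], [1., 0.], [0., 0.], [0., 1.]])   # span{e2, e4}: x1 = x3 = 0
T_I = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])   # span{e1, e3}
P = projector_onto_along(V_I, T_I)
```

With these bases, P maps (0, 1, −1, 0)^⊤ to (0, 1, 0, 0)^⊤ and (0, −1, −1, 0)^⊤ to (0, −1, 0, 0)^⊤, the re-initialisations seen in Section 2; P is idempotent, as any projector must be.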
Example 12 From initial state x0 = (0, 1, −1, 0)^⊤ first a state jump occurs governed by the laws of the constrained mode, but no regular continuation is possible in the constrained mode. Solving the dynamics corresponding to the constrained mode, i.e. (19) with I = {1}, gives u(t) = −cos t. Although (22) is not satisfied on a positive time interval, incorporation of this solution in the definition of initial solutions as well seems well-motivated on physical grounds.
1. there exists an I ⊆ k̄ such that (u, x, y) satisfies (19) on [0, ∞) with initial state x0 in the distributional sense; and

2. u, y are initially nonnegative.

By the fact that the parameters in (18) are constant, it is sufficient to consider only initial time zero.

Given a state x0, we define S(x0) by

    S(x0) := {I ∈ P(k̄) | there exists an initial solution (u, x, y) to (18) that satisfies (19) for mode I}.
4.5 Solution concept

Definition 15 A solution to (18) on [0, T_e), T_e > 0, with initial state x0 consists of a 6-tuple (D, τ, x_e, x_c, u_c, y_c), where D is either {0, ..., N} for some N ≥ 0 or IN, and

    τ : D → IR
    x_e : D → IR^n
    x_c : (0, T_e) \ τ(D) → IR^n
    u_c : (0, T_e) \ τ(D) → IR^k
    y_c : (0, T_e) \ τ(D) → IR^k,

that satisfies the following.

Note that (u_c(t), x_c(t), y_c(t)) is smooth on an interval (a, b) ⊆ (0, T_e) with (a, b) ∩ τ(D) = ∅.
The above definition describes how the solution is built up by concatenation of initial solutions. A solution can be constructed by using the flow chart of Figure 2 and the following description. Taking x0 as the initial state starts the procedure. The state x0 is presented to the mode selection block, resulting in a selected mode. If mode I is selected, there are two possibilities, indicated by the question in the decision block:

1. From the state x0 smooth continuation is possible in the selected mode I ∈ S(x0), i.e. x0 ∈ V_I (answer is "Yes"). Go to the DAE simulation with this initial state and mode I.

2. No smooth continuation is possible in the selected mode I from x0 (answer is "No"), i.e. x0 ∉ V_I. The right arrow leads to the re-initialisation block, which performs the projection along T_I onto V_I. The re-initialised state is returned to the mode-selection block. After solving the mode selection problem, the same two possibilities have to be considered again.

If we arrive in a state where the answer to the question in the decision block is "Yes," the DAE simulation block produces a smooth part of the solution until an event time t_event is reached. The state x0 is set to x_c(t_event−) and again given to the mode selection block. Next, the whole cycle starts again.
Figure 2: Flow chart for the construction of a solution: mode selection; decision block "Is smooth continuation possible in the selected mode without re-initialisation?"; if No, re-initialisation x0 := P_{V_I}^{T_I} x0 and return to mode selection; if Yes, DAE simulation with event detection and x0 := x_c(t_event−).
The construction does not lead to a solution on [0, T_e) if we end up in a state x̃ with S(x̃) = ∅ at some time instant smaller than T_e (deadlock), or if we end up in an infinite loop, where only re-initialisations and mode selections occur without smooth continuation.

Before presenting conditions on the complementarity system that guarantee the existence and uniqueness of solutions, we have to introduce two algebraic mode selection procedures.
Proof. Evident. □
Let (u, x, y) be an initial solution to (18) with initial state x0. The Laplace transforms of u, y, denoted by û, ŷ, are rational and satisfy

    ŷ(s) = C(sI − A)^{-1}x0 + (C(sI − A)^{-1}B + D)û(s)  and  ŷ^⊤(s)û(s) = 0        (24)

for all s ∈ IR, and

    ŷ(s) ≥ 0,   û(s) ≥ 0        (25)

for all s ∈ IR larger than some s0 ∈ IR+. This is even an if-and-only-if statement: the Laplace transforms are rational and satisfy (24)-(25) if and only if the corresponding time functions define an initial solution of (18). Indeed, since ŷ_i and û_i are rational, they have only a finite number of zeros or are identically zero. Hence, since their product ŷ_i(s)û_i(s) vanishes for s > s0, at least one of the two factors has to be identically zero, implying that Item 1 in Definition 14 holds.
We formulate the Rational Complementarity Problem RCP(x0) (nomenclature introduced in [21]): given x0, find rational vector functions (û, ŷ) satisfying (24)-(25).

If (û, ŷ) is a solution to RCP(x0), any mode J satisfying û_{J^c}(s) = 0 and ŷ_J(s) = 0 for all s ∈ IR is a mode for which an initial solution exists satisfying (19) for I = J. Such modes may hence be selected as continuation modes.
Remark 17 The new mode corresponding to a given solution of RCP(x0) is not unique. Indeed, define

    I := {i ∈ k̄ | û_i(s) > 0 for almost all s > s0}.        (28)

Then I is the set of indices i for which u_i as a time function (inverse Laplace transform of û_i) is initially positive¹ and consequently, y_i is zero. Hence, an initial solution to (18) satisfying (19) for mode I exists. Consider now the "undetermined index set"

    K := {i ∈ k̄ | û_i(s) = 0 and ŷ_i(s) = 0 for all s}.

Any mode J with I ⊆ J ⊆ I ∪ K may also legally be selected. However, the solution to (19) in mode J with initial state x0 is the same solution as for mode I. This follows from the fact that the inverse Laplace transforms u, y of û, ŷ satisfy (19) for mode I and for mode J. Since all modes are assumed to be autonomous, (u, y) is the only solution for both modes. Hence, solving (19) for I or J leads to the same triple (u, x, y). Summarizing, given a solution of RCP(x0), the freedom in the choice of the modes corresponding to this solution is exactly described by K. Moreover, all choices lead to the same (u, x, y).
The set of modes I that can be selected, or equivalently, the set of modes for which an initial solution exists satisfying (20) for mode I, is defined as S_RCP(x0). More specifically,

    S_RCP(x0) = {I ∈ P(k̄) | ∃ (û, ŷ) solution to RCP(x0) such that û_{I^c}(s) = 0, ŷ_I(s) = 0 for all s}.        (29)

This completes the formulation of the RCP as a mode selection method. By using the power series expansion of the solution to RCP(x0), we will now derive an alternative mode selection method.
If (û, ŷ) is a solution to RCP(x0), then it necessarily has to satisfy û_{I^c} = 0, ŷ_I = 0 for some I ⊆ k̄. Consequently,

    0 = R_I(s)x0 + G_II(s)û_I(s)
    ŷ_{I^c}(s) = R_{I^c}(s)x0 + G_{I^c I}(s)û_I(s),

where G(s) is the proper transfer function C(sI − A)^{-1}B + D and R(s) is the strictly proper rational matrix C(sI − A)^{-1}. Note that G_II(s) is invertible by Assumption 8. This implies that

    û_I(s) = −G_II^{-1}(s)R_I(s)x0   and
    ŷ_{I^c}(s) = R_{I^c}(s)x0 − G_{I^c I}(s)G_II^{-1}(s)R_I(s)x0.

The maximal degree of the polynomial part of G_II^{-1}(s) is n, because the underlying state space dimension is n. Hence, the maximal degree of the polynomial part of the rational functions û_I(s) and ŷ_{I^c}(s) is n − 1. So, for initial solutions we only have to consider polynomial parts of degree at most n − 1, or equivalently, derivatives of the Dirac function up to order n − 1.
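For the two-carts example (a single constraint, so I = ∅ or I = {1}), this RCP-based selection can be imitated numerically: evaluate ŷ(s) = R(s)x0 for the unconstrained candidate, respectively û(s) = −G^{-1}(s)R(s)x0 for the constrained one, at a few large real values of s. Sampling stands in for "for all sufficiently large s", so this is a heuristic sketch of ours, not the paper's algebraic procedure:

```python
import numpy as np

# Two-carts data from (1): y = x1, scalar input u (so D = 0).
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-2., 1., 0., 0.],
              [1., -1., 0., 0.]])
B = np.array([[0.], [0.], [1.], [0.]])
C = np.array([[1., 0., 0., 0.]])

def select_mode_rcp(x0, s_values=(50.0, 100.0, 200.0)):
    """Heuristic RCP mode selection for the single-constraint example:
    mode I = {} requires y_hat(s) = R(s) x0 >= 0 and mode I = {1} requires
    u_hat(s) = -R(s) x0 / G(s) >= 0, each for all (sampled) large real s."""
    x0 = np.asarray(x0, float)
    def R_x0(s):
        return (C @ np.linalg.solve(s*np.eye(4) - A, x0))[0]    # C (sI-A)^{-1} x0
    def G(s):
        return (C @ np.linalg.solve(s*np.eye(4) - A, B))[0, 0]  # C (sI-A)^{-1} B
    if all(R_x0(s) >= 0.0 for s in s_values):
        return "unconstrained"
    if all(-R_x0(s) / G(s) >= 0.0 for s in s_values):
        return "constrained"
    return None
```

From x0 = (0, −1, −1, 0)^⊤ the unconstrained candidate fails (ŷ(s) ≈ −1/s² < 0) and the constrained mode is selected, in agreement with the first transition of Section 2; from x0 = (0, 0, 0, 1)^⊤ the unconstrained mode is selected.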
In terms of the power series expansion of ŷ(s) around infinity,

    ŷ(s) = Σ_{i=−n+1}^{∞} y^i s^{−i},        (30)

¹We call an impulsive-smooth distribution u initially positive if u is initially nonnegative and additionally, if u_i is regular, then for some ε > 0, u_i(t) > 0 for t ∈ (0, ε).
ŷ(s) is nonnegative for all sufficiently large real s if and only if

    (y^{−n+1}, y^{−n+2}, ...) ⪰ 0.        (31)

Given the system description (A, B, C, D), the Markov parameters of the system are defined by

    H^0 = D,    H^i = C A^{i−1} B   (i = 1, 2, ...).        (32)

Note that

    G(s) = Σ_{i=0}^{∞} H^i s^{−i}.        (33)
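The Markov parameters (32) are easy to generate recursively, and the expansion (33) can be checked numerically by truncation. The sketch below uses the two-carts data from Section 2 (with D = 0) purely as an illustration:

```python
import numpy as np

def markov_parameters(A, B, C, D, count):
    """H^0 = D and H^i = C A^{i-1} B for i = 1, 2, ..., as in (32)."""
    H = [np.atleast_2d(np.asarray(D, float))]
    Ai = np.eye(A.shape[0])                 # holds A^{i-1}
    for _ in range(1, count):
        H.append(C @ Ai @ B)
        Ai = Ai @ A
    return H

# Two-carts data from (1); the first nonzero Markov parameter is H^2 = CAB = 1.
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-2., 1., 0., 0.],
              [1., -1., 0., 0.]])
B = np.array([[0.], [0.], [1.], [0.]])
C = np.array([[1., 0., 0., 0.]])
H = markov_parameters(A, B, C, np.zeros((1, 1)), 40)
```

Truncating (33) after 40 terms at s = 10 reproduces G(10) = C(10I − A)^{−1}B to high accuracy, since the series converges for |s| larger than the spectral radius of A.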
Using the power series expansions of ŷ and û and (33), we can reformulate RCP(x_0) as the
Linear Dynamic Complementarity Problem (nomenclature introduced in [21]) by considering
the coefficients corresponding to equal powers of s. LDCP^α(x_0) is the problem of finding
sequences (u^{-n+1}, …, u^α) and (y^{-n+1}, …, y^α) such that
y^i = Σ_{j=-n+1}^{i} H^{i-j} u^j   if −n+1 ≤ i ≤ 0   (34a)
y^i = C A^{i-1} x_0 + Σ_{j=-n+1}^{i} H^{i-j} u^j   if 1 ≤ i ≤ α   (34b)
are satisfied, and for all indices i ∈ k̄ at least one of the following is true:
(y_i^{-n+1}, y_i^{-n+2}, …, y_i^α) = 0 and (u_i^{-n+1}, u_i^{-n+2}, …, u_i^α) ⪰ 0   (35)
(y_i^{-n+1}, y_i^{-n+2}, …, y_i^α) ⪰ 0 and (u_i^{-n+1}, u_i^{-n+2}, …, u_i^α) = 0.   (36)
LDCP^∞(x_0) denotes the problem of finding a solution (u^{-n+1}, u^{-n+2}, …) and (y^{-n+1}, y^{-n+2}, …)
that satisfies LDCP^α(x_0) for all α ≥ −n+1 (or showing that no such solution exists).
If (u^{-n+1}, u^{-n+2}, …) and (y^{-n+1}, y^{-n+2}, …) is a solution to LDCP^∞(x_0), then modes J sat-
isfying (u_i^{-n+1}, u_i^{-n+2}, …) = 0, i ∈ J^c, and (y_i^{-n+1}, y_i^{-n+2}, …) = 0, i ∈ J, are candidates for
selection.
The complete set of candidates for selection, denoted by S_{LDCP}^α(x_0), is defined by
S_{LDCP}^α(x_0) := { I ∈ P(k̄) | ∃ (u^j)_{j=-n+1}^{α}, (y^j)_{j=-n+1}^{α} solution to LDCP^α(x_0) such that
(35) holds for i ∈ I and (36) holds for i ∈ I^c }.
In some cases, it suffices to consider LDCP^n(x_0) instead of LDCP^∞(x_0) (see Theorem 30 be-
low). In [7], it has been shown that LDCP^α(x_0) is a special case of the Generalized Linear
Complementarity Problem [5] and the Extended Linear Complementarity Problem [6]. In [5],
an algorithm is proposed to find all solutions to the GLCP.
Theorem 18 The following statements are equivalent.
1. The equations (18) have an initial solution for initial state x_0.
2. RCP(x_0) has a solution.
3. LDCP^∞(x_0) has a solution.
Proof. From the derivation of RCP, it follows that 1. and 2. are equivalent. If (û, ŷ) is a
solution to RCP(x_0), then the coefficients of the power series expansion of this solution around
infinity form a solution to LDCP^∞(x_0). Hence, 2. implies 3. To see that 3. implies 1., suppose
that (u^{-n+1}, u^{-n+2}, …) and (y^{-n+1}, y^{-n+2}, …) form a solution to LDCP^∞(x_0) for which (35)
holds for i ∈ I and (36) holds for i ∈ I^c. We will
show that x(0+) ∈ V_I. To this end, note that y_I^i = 0 and u_{I^c}^i = 0 for all i. From (34b), it follows
that x(0+) satisfies
0 = y_I^1 = C_I x(0+) + D_{II} v^0
0 = y_I^2 = C_I A x(0+) + D_{II} v^1 + C_I B_I v^0
⋮
0 = y_I^l = C_I A^{l-1} x(0+) + D_{II} v^{l-1} + C_I B_I v^{l-2} + … + C_I A^{l-2} B_I v^0   (37)
⋮
with v^i = u_I^{i+1}, i ≥ 0. Combining algorithm (12) and the equations above, it follows that
for l ≥ 0 the state A^l x(0+) + Σ_{i=0}^{l-1} A^i B_I v^{l-1-i} belongs to V^j, j ≥ 0, for (A, B_I, C_I, D_{II})
and so in particular for l = 0, x(0+) ∈ lim V^j = V_I. Hence, there exists a regular solution
(u_reg(·), x_reg(·), y_reg(·)) to (19) for mode I with initial state x(0+).
By differentiating (19) in time and evaluating the resulting equalities at time instant 0, we see
that ṽ^i := u_{reg,I}^{(i)}(0), i = 0, 1, … satisfies (37) as well. To show that this implies that ṽ^i = v^i for
all i, interpret both sequences as inputs of the discrete-time system
x(l+1) = A x(l) + B_I v^l,   y_I(l) = C_I x(l) + D_{II} v^l,   l = 0, 1, 2, …
with initial state x(0+) and output y_I(l). Then by (37), y_I(l) = 0 for all l ≥ 0 and the difference
w^i := v^i − ṽ^i is an input that keeps the output of the discrete-time system with initial state 0
identically zero. We introduce the z-transform
w(z) := Σ_{i=0}^{∞} w^i z^{-i}.
Using the transfer function G_{II}(z) of the discrete-time system (see e.g. [13]), we get
0 = G_{II}(z) w(z). The invertibility of G_{II}(z) implies that w(z) = 0 and hence v^i = ṽ^i,
i ≥ 0, or equivalently, u_I^{i+1} = u_{reg,I}^{(i)}(0), i ≥ 0. This also implies that y^{i+1} = y_reg^{(i)}(0), i ≥ 0.
We define u := Σ_{i=0}^{n-1} u^{-i} δ^{(i)} + u_reg, x := Σ_{i=1}^{n-1} Σ_{j=1}^{i} A^{i-j} B u^{-i} δ^{(j-1)} + x_reg and
y := Σ_{i=0}^{n-1} y^{-i} δ^{(i)} + y_reg. Obviously, (u, x, y) satisfies 1. in Definition 14. We only have to
show that 2. in Definition 14 holds as well.
Proof. This follows from the proof above. The second statement is a result of the one-to-one
correspondence. □
We shall use the transformations between LDCP^∞(x_0), RCP(x_0) and initial solutions frequently.
Note that the above proof also yields an alternative way of deriving the LDCP: differentiate the
initial solution with incorporation of the impulsive part and evaluate the results at time instant
zero. For smooth continuations this method can be directly generalized to the nonlinear case as
in [14, 21].
6 Well-posedness results
There exist linear complementarity systems for which no solution exists from certain initial
conditions (due to deadlock or infinitely many jumps without smooth continuation) or for which
the solution is not unique (see [20]).
Definition 21 The complementarity system (18) is (locally) well-posed if from each initial state
there exists an ε > 0 such that a unique solution on [0, ε) in the sense of Definition 15 exists.
This definition can be reformulated as follows. The system is well-posed if and only if from
each state there exists a unique solution on an interval of positive length starting with at most
a finite number of jumps followed by smooth continuation on that interval.
Definition 22 Let (A, B, C, D) be a system with Markov parameters H^i, i = 0, 1, 2, …. The
leading column indices η_1, …, η_k of (A, B, C, D) are defined for j ∈ k̄ as
η_j := inf { i ∈ IN | H^i_{•j} ≠ 0 }
with the convention inf ∅ = ∞. The leading row indices ρ_1, …, ρ_k of the linear system (A, B, C, D)
are defined for j ∈ k̄ as
ρ_j := inf { i ∈ IN | H^i_{j•} ≠ 0 }.
Here H^i_{•j} and H^i_{j•} denote the j-th column and the j-th row of H^i, respectively.
Since we consider only invertible transfer functions (see Assumption 8 and Lemma 5), the leading
row and column indices are all finite. Due to the Cayley-Hamilton theorem, we even have ρ_i ≤ n
and η_i ≤ n. The leading row coefficient matrix M(A, B, C, D) and leading column coefficient
matrix N(A, B, C, D) for the system (A, B, C, D) are defined as
M(A, B, C, D) := col(H^{ρ_1}_{1•}, …, H^{ρ_k}_{k•}),   N(A, B, C, D) := (H^{η_1}_{•1}, …, H^{η_k}_{•k}).
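Definition 22 translates directly into a small computation: scan the Markov parameters for the first nonzero row (column) and stack the corresponding rows (columns). The sketch below is our own illustration, exercised on mechanical data of the kind treated in section 8 (unit masses; the spring and constraint matrices are assumptions for the example):

```python
import numpy as np

def leading_structure(A, B, C, D):
    """Leading row indices rho_j, leading column indices eta_j, and the
    leading row/column coefficient matrices M and N of Definition 22.
    Indices are finite whenever the transfer matrix is invertible."""
    n, k = A.shape[0], C.shape[0]
    H = [D]
    Ak = np.eye(n)
    for _ in range(2 * n + 1):          # rho_j, eta_j <= n by Cayley-Hamilton
        H.append(C @ Ak @ B)
        Ak = Ak @ A
    rho = [next(i for i, Hi in enumerate(H) if np.any(Hi[j, :])) for j in range(k)]
    eta = [next(i for i, Hi in enumerate(H) if np.any(Hi[:, j])) for j in range(k)]
    M = np.vstack([H[rho[j]][j:j+1, :] for j in range(k)])   # stack leading rows
    N = np.hstack([H[eta[j]][:, j:j+1] for j in range(k)])   # stack leading columns
    return rho, eta, M, N

# Mechanical example with unit mass matrix (assumed data):
# A = [[0, I], [-K, 0]], B = [[0], [F^T]], C = [F, 0], D = 0.
K = np.array([[2., -1.], [-1., 1.]])
F = np.array([[1., 0.], [1., -1.]])
A = np.block([[np.zeros((2, 2)), np.eye(2)], [-K, np.zeros((2, 2))]])
B = np.vstack([np.zeros((2, 2)), F.T])
C = np.hstack([F, np.zeros((2, 2))])
D = np.zeros((2, 2))
rho, eta, M, N = leading_structure(A, B, C, D)
print(rho, eta)   # [2, 2] [2, 2]
print(M)          # F F^T = [[1, 1], [1, 2]], and N equals M here
```

For such mechanical systems both coefficient matrices reduce to F M^{-1} F^T, which is positive definite and hence a P-matrix.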
Proof. From Lemma 5, it is sufficient to show that G_{II}(s) is invertible for all I ⊆ k̄. For
notational convenience, we assume I = l̄, l ∈ k̄. If M has only nonzero principal minors, then
M_{II} is invertible. Hence, G_{II}(s) = diag(s^{-ρ_1}, …, s^{-ρ_l}) V(s), where V(s) is a biproper rational
matrix, because V(∞) = M_{II} is invertible [10, Thm. 4.5]. The reasoning proceeds analogously for N. □
Definition 27 A state x_0 is called regular for mode I if x_0 ∈ V_I and the corresponding smooth
solution in this mode satisfies (22) for a time interval of positive length. A state is called regular
if it is regular for at least one mode of the system.
A state x_0 is regular if and only if RCP(x_0) has a strictly proper solution. Equivalently, x_0
is regular if and only if LDCP^∞(x_0) has a solution with u^{-n+1} = … = u^0 = 0.
Theorem 28 Suppose that the leading row coefficient matrix M is a P-matrix. Then x_0 ∈ IR^n
is a regular state if and only if for all i ∈ k̄
(C_i x_0, C_i A x_0, …, C_i A^{ρ_i − 1} x_0) ⪰ 0.   (39)
Moreover, the smooth continuation is unique.
and
col(y_1^{ρ_1 + p}, …, y_k^{ρ_k + p}) = φ_p(x_0, u^1, …, u^{p-1}) + M u^p   (41)
for linear functions φ_p, p ≥ 1. We denote by L(l), l ∈ IN, the problem of finding a solution
(u^1, …, u^l), { y_i^j | i = 1, …, k; j = 1, …, ρ_i + l } to (40) and (41), p = 1, 2, …, l, together with
the requirement that for all indices i ∈ k̄ at least one of the following statements is true:
(y_i^1, y_i^2, …, y_i^{ρ_i + l}) = 0 and (u_i^1, u_i^2, …, u_i^l) ⪰ 0   (42)
(y_i^1, y_i^2, …, y_i^{ρ_i + l}) ⪰ 0 and (u_i^1, u_i^2, …, u_i^l) = 0.   (43)
Note that L(l) is a subproblem of LDCP^∞(x_0) and that if we find a solution (y^1, y^2, …),
(u^1, u^2, …) satisfying L(l) for all l ≥ 0, then this solution is a solution to the corresponding
LDCP^∞(x_0).
We claim that L(l) has a unique solution u^j, j = 1, …, l − 1 and y_i^j, i ∈ k̄, j = 1, …, ρ_i + l − 1,
for all l ≥ 0. Note that this holds for l = 0. We will prove this by induction, similarly as in
[14, 21].
We write I_l, J_l, K_l for the active (input) index set, the inactive index set and the undecided
index set, respectively, determined by L(l). Formally, for l ≥ 1, I_l = { i ∈ k̄ | (u_i^1, …, u_i^l) ≻ 0 },
J_l = { i ∈ k̄ | (y_i^1, …, y_i^{ρ_i + l}) ≻ 0 } and K_l = k̄ \ (I_l ∪ J_l), with y_i^j, i = 1, …, k; j = 1, …, ρ_i + l
and u^i, i = 1, …, l determined by L(l). For convenience we also define I_0 := ∅, J_0 = { i ∈ k̄ |
(y_i^1, …, y_i^{ρ_i}) ≻ 0 } and K_0 = k̄ \ J_0.
Note that L(l − 1) is a subproblem of L(l), so variables uniquely determined by L(l − 1) are
automatically uniquely specified for L(l). As a consequence, I_{l-1}, J_{l-1}, K_{l-1} are determined as
well. Comparing L(l) with L(l − 1), we observe that L(l) has one additional equation: (41) for
p = l. We divide this equation into the three parts given by I_{l-1}, J_{l-1} and K_{l-1}. For notational
convenience, we omit all indices depending on l and all superscripts:
( y_I )   ( z_I )   ( M_II  M_IJ  M_IK ) ( u_I )
( y_J ) = ( z_J ) + ( M_JI  M_JJ  M_JK ) ( u_J )   (44)
( y_K )   ( z_K )   ( M_KI  M_KJ  M_KK ) ( u_K )
with z = φ_l(x_0, u^1, …, u^{l-1}). From the definition of I_{l-1}, J_{l-1} and K_{l-1}, we get y_I = 0 and
u_J = 0. By substituting this result in (44), we obtain
0 = z_I + M_II u_I + M_IK u_K   (45)
y_J = z_J + M_JI u_I + M_JK u_K   (46)
y_K = z_K + M_KI u_I + M_KK u_K.   (47)
Since M_II is a principal submatrix of a P-matrix, it is invertible and hence we get from (45)
that u_I = −M_II^{-1}(z_I + M_IK u_K). Substituting this expression in (47) leads to
y_K = z_K − M_KI M_II^{-1} z_I + (M_KK − M_KI M_II^{-1} M_IK) u_K.   (48)
Due to (42) and (43) and the definition of K_{l-1}, the complementarity conditions
u_K ≥ 0, y_K ≥ 0, y_K^T u_K = 0   (49)
hold. So, (48) and (49) constitute an LCP. Since M_KK − M_KI M_II^{-1} M_IK is a Schur complement
of a P-matrix, it follows from Proposition 2.3.5 in [4] that it is a P-matrix as well. Theorem 6
states that the corresponding LCP has a unique solution. From u_K we can compute u_I and y_J.
Hence, the induction hypothesis has been proven for l. So we find a solution of LDCP^∞(x_0)
with u^{-n+1} = … = u^0 = 0 and y^{-n+1} = … = y^0 = 0. Since this solution is unique, the one-
to-one correspondence between initial solutions and solutions of LDCP^∞(x_0) implies that the
corresponding smooth initial solution is unique. □
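Theorem 6 guarantees a unique LCP solution when the matrix is a P-matrix, which suggests a simple brute-force check: enumerate candidate active index sets and accept the first nonnegative complementary pair. The sketch below is our own illustration (exponential in the dimension, and not the algorithm of [5]):

```python
import numpy as np
from itertools import combinations

def solve_lcp(M, q, tol=1e-9):
    """Find u >= 0 with y = q + M u >= 0 and y^T u = 0 by enumerating
    active index sets. For a P-matrix M the solution exists and is
    unique (Theorem 6); enumeration is only meant as a sketch."""
    k = len(q)
    for r in range(k + 1):
        for alpha in combinations(range(k), r):
            a = list(alpha)
            u = np.zeros(k)
            if a:  # on the active set, u_a solves M_aa u_a = -q_a
                u[a] = np.linalg.solve(M[np.ix_(a, a)], -q[a])
            y = q + M @ u
            if (u >= -tol).all() and (y >= -tol).all():
                return u, y
    raise ValueError("no solution found (M is not a P-matrix?)")

# Small P-matrix example (hypothetical data):
u, y = solve_lcp(np.array([[2., 1.], [1., 2.]]), np.array([-1., -1.]))
print(u, y)   # u = [1/3, 1/3], y = [0, 0]
```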
Theorem 29 If the leading column coefficient matrix N is a P-matrix, then for every state x_0,
LDCP^α(x_0), α ≥ 0, has a solution that is unique except for u_i^j, i ∈ k̄, j = α − η_i + 1, …, α,
which are left undetermined. Furthermore, u_i^{-n+1} = u_i^{-n+2} = … = u_i^{-η_i} = 0, i ∈ k̄, and
y^{-n+1} = … = y^0 = 0.
Proof. The proof is based on separation of the equalities (34) in two parts, (34a) and (34b),
providing the equations for y^i, i = −n+1, …, 0 and y^i, i = 1, …, α, respectively. For both
parts we start an induction that is analogous to the previous proof: we reduce the LDCP to
a series of LCPs which can be solved uniquely. To do so, we constitute successive LCPs by
selecting certain equations from (34). This is done in such a way that only principal submatrices
of the leading column coefficient matrix N appear in these LCPs.
We introduce the index sets O_j := { i ∈ k̄ | η_i = j }, j = 0, 1, …, n and S_j := ∪_{i=0}^{j} O_i,
j = 0, 1, …, n. In words, O_j is the set of indices i for which the i-th column in the sequence
of Markov parameters (H^0, H^1, …) is nonzero for the first time in H^j, and S_j is the set of indices
i for which at least one of the columns (H^0_{•i}, H^1_{•i}, …, H^j_{•i}) is nonzero. As remarked
before, η_i ≤ n. Hence, S_n = k̄. By definition, H^i_{•S_j^c} = 0 for i ≤ j.
By suitable permutation of rows and columns, we get the existence of 0 = k_0 ≤ k_1 ≤ k_2 ≤
… ≤ k_n ≤ k_{n+1} = k such that O_j = { k_j + 1, …, k_{j+1} }, j = 0, 1, …, n. Then
N = (H^0_{•O_0} H^1_{•O_1} … H^n_{•O_n}).
i ∈ S_0 results in the LCP
y_{S_0}^{-n+1} = H^0_{S_0 S_0} u_{S_0}^{-n+1} = N_{S_0 S_0} u_{S_0}^{-n+1}.   (53)
From (53), y_{S_0}^{-n+1} = 0 follows immediately. This proves the first step of our induction.
Suppose that the induction hypothesis above holds for r − 1, where 2 ≤ r ≤ n. Since
LDCP^{-n+r-1}(x_0) is a subproblem of LDCP^{-n+r}(x_0), we consider only the additional equal-
ity in (34):
y^{-n+r} = H^0 u^{-n+r} + H^1 u^{-n+r-1} + … + H^{r-1} u^{-n+1}
       = H^0_{•S_0} u_{S_0}^{-n+r} + H^1_{•S_1} u_{S_1}^{-n+r-1} + … + H^{r-1}_{•S_{r-1}} u_{S_{r-1}}^{-n+1} = …   (54)
The second equality follows from H^i_{•S_i^c} = 0, the third one follows from the induction hypothesis
(50), and the last equality is a consequence of S_j \ S_{j-1} = O_j. Since u_{S_{r-1}^c}^{-n+1}, u_{S_{r-2}^c}^{-n+2}, …, u_{S_0^c}^{-n+r} do
not appear in this additional equation, these variables remain undetermined.
Equation (54) consists of k scalar equations. Considering only the equalities for y_i^{-n+r}, i ∈ S_{r-1},
we find
y_{S_{r-1}}^{-n+r} = (H^0_{S_{r-1} O_0} H^1_{S_{r-1} O_1} … H^{r-1}_{S_{r-1} O_{r-1}}) col(u_{O_0}^{-n+r}, u_{O_1}^{-n+r-1}, …, u_{O_{r-1}}^{-n+1})
           = N_{S_{r-1} S_{r-1}} col(u_{O_0}^{-n+r}, u_{O_1}^{-n+r-1}, …, u_{O_{r-1}}^{-n+1}) =: N_{S_{r-1} S_{r-1}} v_r.
This is the LCP we are looking for. Since N_{S_{r-1} S_{r-1}} (as a principal submatrix of N) is also a P-matrix, the
above LCP has a unique solution (Theorem 6). Hence, this solution must be v_r = 0 and y_{S_{r-1}}^{-n+r} = 0.
Using this in (54) shows that y^{-n+r} = 0. In combination with the induction hypothesis for r − 1,
this yields the hypothesis for r. This completes our induction step and hence the proof of our
first claim.
To complete the proof, we start a second induction with hypothesis as stated in the formulation
of the theorem. Note that this is equivalent to saying: LDCP^α(x_0) has a unique solution for
every state x_0; only u_{S_0^c}^α, u_{S_1^c}^{α-1}, …, u_{S_{n-1}^c}^{α-n+1} are left undetermined. For α = 0 this hypothesis is
true, for it follows from the previous induction by taking r = n. Suppose the hypothesis is true
for α − 1, α ≥ 1. Since LDCP^{α-1}(x_0) is a subproblem of LDCP^α(x_0), the variables u_{S_0}^{α-1}, …,
u_{S_{n-1}}^{α-n}, u^{α-n-1}, …, u^{-n+1} are already uniquely determined.
In comparison with LDCP^{α-1}(x_0), LDCP^α(x_0) has the additional equality
y^α = ψ_α(x_0, u_{S_0}^{α-1}, u_{S_1}^{α-2}, …, u_{S_{n-1}}^{α-n}, u^{α-n-1}, …, u^{-n+1}) + N col(u_{O_0}^α, u_{O_1}^{α-1}, …, u_{O_n}^{α-n})
for some function ψ_α. Splitting this equation into three parts according to the index sets
I, J, K, we can follow the same reasoning as in the proof of Theorem 28 to conclude that
y^α, u_{O_0}^α, u_{O_1}^{α-1}, …, u_{O_n}^{α-n} are uniquely determined, and thus prove the induction hypothesis for
α. □
Since the impulsive part is unique, the re-initialisation is unique and results in x(0+) := x_0 +
Σ_{i=0}^{n-1} A^i B u^{-i}. The complementarity conditions (35) and (36) imply that (y^1, y^2, …, y^n) ⪰ 0.
The right hand side of (34) contains for y_i^1, …, y_i^{ρ_i}, i ∈ k̄, only coefficients corresponding to the
impulsive part, i.e. only u^0, …, u^{-n+1}. Hence, observe that (C_i x(0+), …, C_i A^{ρ_i - 1} x(0+)) =
(y_i^1, …, y_i^{ρ_i}) ⪰ 0, i ∈ k̄. According to Theorem 28, x(0+) is a regular state. So after at most one
re-initialisation, (unique) smooth continuation is guaranteed. □
The next theorem states that in case N is a P-matrix, it is sufficient to consider LDCP^n(x_0)
(instead of LDCP^∞(x_0)) to select a mode. Hence, only an algebraic problem with a finite number
of constraints has to be solved to fulfil the mode selection criterion that has been proposed above.
Obviously, LDCP^n(x_0) is to be preferred over LDCP^∞(x_0) from a computational point of view.
Theorem 30 If the leading column coefficient matrix N is a P-matrix, then from every ini-
tial state there exists a unique initial solution to (18). This solution evolves in mode I, where
I := { i ∈ k̄ | (u_i^{-n+1}, u_i^{-n+2}, …, u_i^{n-η_i}) ≻ 0 } with the vectors {u^j} constituting a solution to
LDCP^n(x_0).
Proof. Let (y^{-n+1}, y^{-n+2}, …, y^n) and (u^{-n+1}, u^{-n+2}, …, u^n) be a solution to LDCP^n(x_0) and
let I be defined as in the formulation of the theorem. The state after re-initialisation x(0+) is
defined by x_0 + Σ_{i=0}^{n-1} A^i B u^{-i}. The jump is induced by the impulsive input u_imp = Σ_{i=0}^{n-1} u^{-i} δ^{(i)}.
It follows from the definition of I that (u_i^{-n+1}, …, u_i^{n-η_i}) = 0, i ∈ I^c, and in combination with
(35), (36) the same definition yields (y_i^{-n+1}, …, y_i^n) = 0, i ∈ I. Using (34b), we conclude that
x(0+) satisfies
0 = y_I^1 = C_I x(0+) + D_{II} v^1
0 = y_I^2 = C_I A x(0+) + D_{II} v^2 + C_I B_I v^1
⋮   (55)
0 = y_I^n = C_I A^{n-1} x(0+) + D_{II} v^n + C_I B_I v^{n-1} + … + C_I A^{n-2} B_I v^1,
with v^i = u_I^i. By using (12) and the equations above, we can show that x(0+) ∈ V^j, j =
0, 1, 2, …, n, for (A, B_I, C_I, D_{II}) and so x(0+) ∈ lim V^j = V_I. Hence, there exists a regular
solution (u_reg(·), x_reg(·), y_reg(·)) to (19) in mode I with initial state x(0+). We define
ũ := Σ_{i=0}^{n-1} u^{-i} δ^{(i)} + u_reg, with x̃ and ỹ defined analogously as in the proof of Theorem 18.
Analogously to the proof of Theorem 18, define w^i := u_I^{i+1} − u_{reg,I}^{(i)}(0), i = 0, 1, …; one obtains
G_{II}(z) w(z) = z^{-n} p(z)   (57)
for some proper function p(z), where w(z) denotes the z-transform of w. For notational
simplicity, we set I = l̄, l ∈ k̄. Since N_{II} is a P-matrix (and hence invertible), G_{II} can be written
as
G_{II}(z) = V_2(z) diag(z^{-η_1}, …, z^{-η_l})   (58)
with V_2 biproper, because V_2(∞) = N_{II} is invertible (Theorem 4.5 in [10]). Hence, (57) yields
w(z) = G_{II}^{-1}(z) z^{-n} p(z) = diag(z^{η_1 - n}, …, z^{η_l - n}) p̃(z),
where p̃(z) = V_2^{-1}(z) p(z) is proper. The definition of w^i now implies that u_{reg,j}^{(i)}(0) = u_j^{i+1},
j ∈ I, i = 0, 1, …, n − η_j − 1.
Since for j ∈ I,
(u_j^{-n+1}, …, u_j^0, u_{reg,j}^{(0)}(0), …, u_{reg,j}^{(n-η_j-1)}(0)) = (u_j^{-n+1}, …, u_j^{n-η_j}) ≻ 0,
the distribution ũ_j ∈ C_imp is initially positive. Note that ỹ_I = 0 by construction of ỹ. For
j ∈ I^c, ũ_j = 0 by definition. Note that
(y_j^{-n+1}, …, y_j^0, y_{reg,j}^{(0)}(0), …, y_{reg,j}^{(n-1)}(0)) = (y_j^{-n+1}, …, y_j^n) ⪰ 0
due to the equality between u_{reg}^{(i-1)}(0) and u^i. Hence, if (y_j^{-n+1}, …, y_j^n) ≻ 0, then ỹ_j ∈ C_imp is
initially positive. For j ∈ I^c, it can also happen that (y_j^{-n+1}, …, y_j^n) = 0; however, this implies
that ỹ_j is identically zero. To see this, note that y_{reg,I^c} can be written as the output of the
system
ẋ = (A + B_I F) x
y_{reg,I^c} = (C_{I^c} + D_{I^c I} F) x,
because the input u satisfying (19) can be given in feedback form u = F x (see section 4). By
the Cayley-Hamilton theorem and because the underlying state space dimension of the system
is equal to n, (y_i^{-n+1}, …, y_i^n) = 0 implies (y_i^{-n+1}, y_i^{-n+2}, …) = 0. Since y_{reg,i} is real-analytic
(it is even a Bohl function), ỹ_i ∈ C_imp is identically zero. Hence, (ũ, x̃, ỹ) is an initial
solution to (18).
Uniqueness follows from the fact that LDCP^∞(x_0) has a unique solution (Theorem 29).
Indeed, the one-to-one correspondence between initial solutions and solutions to LDCP^∞(x_0)
implies that there is only one initial solution, which must be given by the above mode. □
Remark 31 Since LDCP^∞(x_0) has a unique solution, it necessarily has the same solution as
selected by LDCP^n(x_0), as indicated in the above theorem. The equivalence between LDCP^∞(x_0)
and RCP(x_0) shows that RCP(x_0) also selects the correct mode.
Remark 32 Solving LDCP^n(x_0) can be simplified by using Theorem 29. This theorem states
that the variables y^{-n+1}, y^{-n+2}, …, y^0 and u_i^{-n+1}, u_i^{-n+2}, …, u_i^{-η_i}, i ∈ k̄, can immediately be
set to zero.
Remark 33 Note that it is not claimed in the proof that the regular part of (ũ, ỹ) is initially
nonnegative; actually, this may not be true, as shown in the example below. In such cases the
initial solution constructed in the proof just serves as a re-initialisation.
7 Computational example
In this section, we illustrate the computation of trajectories of the example of section 2 by means
of the flow chart in figure 2. Suppose that the initial state equals
x_e(0) = x_0 = (0.3202, −0.4335, 0.3716, −1.0915)^T.
Presenting this state to the mode selection block will result in selection of the unconstrained
mode (I(0) = ∅), because the distance to the stop is strictly positive. We will show how to go
through the flow chart.
DAE simulation Since the unconstrained dynamics is specified by an ordinary differential
equation (ODE), a solution can be computed by an ODE solver.
Event detection At time t = 1, we arrive at the state (0, −1, −1, 0)^T, which is not regular for
the unconstrained mode. Note that y(1) = 0 and ẏ(1) < 0, so continuing in the unconstrained
mode would violate the inequality constraint y(t) ≥ 0. So τ(1) = 1 is an event time and
x_e(1) = (0, −1, −1, 0)^T. We have to select a new mode.
Mode selection Transforming the dynamical system to the Laplace domain leads to
(s^4 + 3s^2 + 1) ŷ(s) = (s^3 + s) x_{10} + s x_{20} + (s^2 + 1) x_{30} + x_{40} + (s^2 + 1) û(s)   (59)
for an initial state x_0 = (x_{10}, x_{20}, x_{30}, x_{40})^T. For x_e(1) = (0, −1, −1, 0)^T, the pair ŷ(s) = 0,
û(s) = (s^2 + s + 1)/(s^2 + 1) is the only solution to RCP(x_e(1)), so S_RCP(x_e(1)) = {{1}}. Hence, the constrained mode must be
selected (I(1) = {1}). Since the solution to RCP(x_e(1)) is not strictly proper, the answer to the
question in the decision block is negative, so we have to re-initialise.
Re-initialisation Using (12) and (14), we can compute the consistent states and the jump space:
T_{{1}} = Im (0, 0, 1, 0)^T,   V_{{1}} = Ker [1 0 0 0; 0 0 1 0] = Im ((0, 1, 0, 0)^T (0, 0, 0, 1)^T).
To re-initialise we have to project x_e(1) onto V_{{1}} along T_{{1}}, which results in
x(1+) = P_{V_{{1}}}^{T_{{1}}} x_e(1) = (0, −1, 0, 0)^T.
The solution to RCP(x(1+)) is strictly proper. Hence, the question in the decision block is answered positively and we
can go to the DAE simulation. The physical interpretation is clear: the left cart hits the stop.
Instantaneously, the velocity is put to zero and the right cart keeps the left cart pushed against
the stop.
DAE simulation The dynamics of the constrained mode is given by a set of DAEs. However,
these can easily be translated into an ODE. The input u must be chosen in such a way that it
keeps y identically zero. Since y = x_1, ẏ = x_3 and ÿ = −2x_1 + x_2 + u, the input should equal
u = 2x_1 − x_2. Hence, the dynamics is given by x_1 = x_3 = 0, ẍ_2 = −x_2, u = −x_2. Incorporating x(1+) as new
initial condition, we get x_2(t) = −cos(t − 1), u(t) = cos(t − 1) for t in an interval starting at 1.
Note that we could also have concluded this by taking the inverse Laplace transform of û in the
previous mode selection. We can continue in this mode as long as u(t) ≥ 0.
Event detection An event is detected at τ(2) = inf{ t ≥ 1 | cos(t − 1) < 0 } = 1 + π/2.
The corresponding event state is x_e(2) = (0, 0, 0, 1)^T. Again we have to select
a new mode.
Mode selection This time, the LDCP will be demonstrated as a mode selection method. Since
the conditions of Theorem 30 are satisfied, we can use LDCP^4(x_e(2)) for mode selection:
y^{-3} = 0
y^{-2} = 0
y^{-1} = u^{-3}
y^0 = u^{-2}
y^1 = u^{-1} − 2u^{-3}
y^2 = u^0 − 2u^{-2}
y^3 = u^1 − 2u^{-1} + 5u^{-3}
y^4 = 1 + u^2 − 2u^0 + 5u^{-2},
with complementarity conditions (35) and (36). Setting y^i = 0, i ∈ {−3, …, 4}, leads to
(u^{-3}, …, u^1, u^2) = (0, …, 0, −1), which is not lexicographically nonnegative. Hence, (35) does not hold. Setting
u^i = 0, i ∈ {−3, …, 4}, leads to (y^{-3}, …, y^3, y^4) = (0, …, 0, 1) ⪰ 0, implying that (36) holds.
Hence, S_{LDCP}^4(x_e(2)) = {∅} and the unconstrained mode must be selected (I(2) = ∅).
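This selection can be checked numerically: with all u^i = 0, (34b) reduces to y^i = C A^{i-1} x_0. A sketch, assuming a reconstructed state-space realization of the two-carts system:

```python
import numpy as np

# Assumed realization of the section 2 two-carts system.
A = np.array([[0., 0., 1., 0.], [0., 0., 0., 1.],
              [-2., 1., 0., 0.], [1., -1., 0., 0.]])
C = np.array([1., 0., 0., 0.])
n = 4
xe2 = np.array([0., 0., 0., 1.])   # event state x_e(2)

# With u^i = 0 for all i, (34b) gives y^i = C A^{i-1} x0 for i = 1, ..., n.
y = [float(C @ np.linalg.matrix_power(A, i - 1) @ xe2) for i in range(1, n + 1)]
print(y)   # [0.0, 0.0, 0.0, 1.0]: (36) holds, so the unconstrained mode is valid
```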
Since the impulsive part of u is zero, i.e. u^{-3} = u^{-2} = u^{-1} = u^0 = 0, the arrow marked "yes" in
the flow chart must be followed, which leads to the DAE simulation. This could also be observed
from the fact that (0, 0, 0, 1)^T is a consistent state for the unconstrained mode. In terms of
the physical system: the right cart was on the right of its equilibrium and pulled the left cart
away from the stop. The simulated trajectory is plotted in figure 3. Note the complementarity
between u and x_1 and the discontinuity in the derivative of x_1 at time t = 1.
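The event detection step can be mimicked with a standard ODE integrator: integrate the unconstrained dynamics until y = x_1 crosses zero, then apply the jump. A sketch, under the assumption that the realization below matches the section 2 example:

```python
import numpy as np

# Assumed realization of the section 2 two-carts system (unconstrained mode).
A = np.array([[0., 0., 1., 0.], [0., 0., 0., 1.],
              [-2., 1., 0., 0.], [1., -1., 0., 0.]])

def rk4_step(x, h):
    # classical Runge-Kutta step for xdot = A x
    k1 = A @ x
    k2 = A @ (x + h / 2 * k1)
    k3 = A @ (x + h / 2 * k2)
    k4 = A @ (x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([0.3202, -0.4335, 0.3716, -1.0915])
t, h = 0.0, 1e-3
while x[0] > 0:              # event detection: y = x1 reaches the stop
    x = rk4_step(x, h)
    t += h
print(t, x)                  # t close to 1, x close to (0, -1, -1, 0)

# Re-initialisation: project onto V_{1} (x1 = x3 = 0) along T_{1} = Im B,
# i.e. the velocity of the left cart is set to zero.
x[0], x[2] = 0.0, 0.0
print(x)                     # x(1+) = (0, -1, 0, 0)
```

After the jump, the constrained-mode solution x_2(t) = −cos(t − 1), u(t) = cos(t − 1) can be continued analytically until u changes sign at t = 1 + π/2.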
Figure 3: Simulated trajectory of the two-carts system: u, x_1 and x_2 as functions of time.
To consider the special case of section 2, we take the initial state x_e(0) = x_0 = (0, 1, −1, 0)^T.
Substituting this initial condition in (59) results in
(s^4 + 3s^2 + 1) ŷ(s) = s − s^2 − 1 + (s^2 + 1) û(s).
Solving RCP(x_0) leads to ŷ(s) = 0 and û(s) = (s^2 − s + 1)/(s^2 + 1), and so S_RCP(x_0) = {{1}}. We select
the constrained mode; the polynomial part of û induces a jump to x(0+) = (0, 1, 0, 0)^T. Keeping the
constrained mode would give û(s) = −s/(s^2 + 1) as
solution of RCP(x(0+)). This is not a valid choice. The solution is û(s) = 0 and ŷ(s) = s/(s^4 + 3s^2 + 1),
which corresponds to the unconstrained mode. Since the constrained mode cannot be chosen,
we get x_e(1) = x(0+). Since the solution of RCP(x(0+)) is strictly proper, the decision block
question is answered positively.
8 Mechanical Systems
In this section, we show that the mode selection rule that we propose coincides with the one
proposed by Moreau [16, 17] when both rules are applied to the class of systems that are covered
by both our and Moreau's framework, to wit, linear mechanical systems.
We will focus on linear mechanical systems whose dynamics in free motion is given by the
differential equations
M q̈ + D q̇ + K q = 0,   (60)
where q denotes the vector of generalized coordinates, M denotes the generalized mass matrix,
which is assumed to be positive definite, D denotes the damping matrix and K the elasticity
matrix. The system is subject to frictionless unilateral constraints given by
F q ≥ 0   (61)
with F of full row rank. Furthermore, we assume that impacts are purely inelastic.
To obtain a complementarity formulation, we introduce the constraint forces u needed to satisfy
the unilateral constraints, and the state vector x = col(q, q̇). According to the rules of classical
mechanics, the system can then be written as follows:
ẋ = [ 0          I         ] x + [ 0          ] u   (62a)
    [ −M^{-1}K   −M^{-1}D ]     [ M^{-1}F^T ]
y = (F 0) x   (62b)
y ≥ 0, u ≥ 0, y^T u = 0,   (62c)
where the matrices in (62a) and (62b) are denoted by A, B and C, respectively.
This system satisfies ρ_i = η_i = 2, i ∈ k̄; note that M(A, B, C, D) = N(A, B, C, D) = F M^{-1} F^T
is positive definite and hence a P-matrix (Theorem 7).
We consider only initial states x_0 = col(q_0, q̇_0) with F q_0 ≥ 0. We call these points feasible. In
the two-carts system, this means that we do not consider initial states for which the left cart
starts on the left of the stop. In Moreau's sweeping process (see [16, 17]) no jumps occur in
q itself, but jumps can occur in the velocities q̇. These jumps are governed by the following
minimization problem, where J := { i ∈ k̄ | F_i q_0 = 0 }.
Minimization Problem 34 Let an initial state x_0 = col(q_0, q̇_0) be given. The new state after
re-initialization, denoted by x(0+) = col(q(0+), q̇(0+)), is determined by
q(0+) = q_0
q̇(0+) = arg min_{ {w | F_i w ≥ 0, i ∈ J} } (1/2)(w − q̇_0)^T M (w − q̇_0).
Note that the minimization problem has a unique solution. The problem reflects a kind of
"principle of economy": among the kinematically admissible right velocities, the nearest one
is chosen in the kinetic metric [16, p. 75]. Observe that if we have proven that jumps in our
formulation correspond to the above minimization problem, then it follows that the feasible set
{ x ∈ IR^n | C x ≥ 0 } is invariant under the dynamics as introduced in section 4.
The Kuhn-Tucker conditions [12] for the minimization problem give necessary conditions for op-
timality. The vector q̇(0+) is the minimizing argument only if there exists a Lagrange multiplier
λ such that
M (q̇(0+) − q̇_0) − F_J^T λ = 0   (63)
λ ≥ 0, F_J q̇(0+) ≥ 0, λ^T F_J q̇(0+) = 0.   (64)
From (63) we obtain
q̇(0+) = q̇_0 + M^{-1} F_J^T λ.   (65)
Substituting (65) in (64) results in the LCP
F_J q̇(0+) = F_J q̇_0 + F_J M^{-1} F_J^T λ   (66)
λ ≥ 0, F_J q̇(0+) ≥ 0, λ^T F_J q̇(0+) = 0.   (67)
According to Theorem 6, this LCP has a unique solution, because F_J M^{-1} F_J^T is a P-matrix.
Since the minimization problem (34) is convex, the Kuhn-Tucker conditions are even sufficient
for optimality. Hence, the formulated LCP is equivalent to the minimization problem for deter-
mining the jumps. Notice that once this LCP is solved, the required jumps are known, because
q̇(0+) follows from (65).
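For small systems, the post-impact velocity can be computed in a few lines by solving the multiplier LCP and back-substituting into (65). The sketch below (`moreau_jump` is our hypothetical helper; it uses active-set enumeration rather than a production LCP algorithm, which is valid here because F_J M^{-1} F_J^T is positive definite, hence a P-matrix) is exercised on data we assume for the two-carts system with hook:

```python
import numpy as np
from itertools import combinations

def moreau_jump(Mmat, F, q0, qdot0, tol=1e-9):
    """Post-impact velocity of Minimization Problem 34 via the LCP in
    the multiplier lambda, followed by back-substitution as in (65)."""
    J = [i for i in range(F.shape[0]) if abs(F[i] @ q0) < tol]  # active constraints
    FJ = F[J]
    W = FJ @ np.linalg.solve(Mmat, FJ.T)      # F_J M^{-1} F_J^T
    b = FJ @ qdot0
    m = len(J)
    for r in range(m + 1):
        for a in map(list, combinations(range(m), r)):
            lam = np.zeros(m)
            if a:
                lam[a] = np.linalg.solve(W[np.ix_(a, a)], -b[a])
            if (lam >= -tol).all() and (b + W @ lam >= -tol).all():
                return qdot0 + np.linalg.solve(Mmat, FJ.T @ lam)
    raise ValueError("no solution")

# Two carts with hook (assumed data): unit masses, F as in (71).
F = np.array([[1., 0.], [1., -1.]])
v = moreau_jump(np.eye(2), F, np.zeros(2), np.array([-2., 1.]))
print(v)   # (0, 0): both constraints active, the carts come to rest
```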
We will now prove that LDCP^n(x_0) (and hence also LDCP^∞(x_0) and RCP(x_0)) is equivalent
to the optimization problem in the sense that both methods result in the same jumps of state
and select the same mode.
Theorem 35 For linear mechanical systems of the form (62) with M positive definite and F
of full row rank, the re-initialisation by means of LDCP^n(x_0) (or LDCP^∞(x_0) or RCP(x_0))
agrees with Moreau's sweeping process [16, 17] for feasible initial states. Linear mechanical
complementarity systems are well-posed.
Proof. Since the leading row coefficient matrix and the leading column coefficient matrix are P-matrices, well-
posedness follows from Theorem 23. Furthermore, Theorem 29 states that u^{-2} = u^{-3} = … =
u^{-n+1} = 0. Because we start from a feasible state x_0, it follows from the proof of this theorem
that even u^{-1} = 0. Indeed, the next LCP in the series is
y^1 = C x_0 + C A B u^{-1}
Figure 4: Two-carts system with hook.
with the corresponding complementarity conditions. Since this LCP has a unique solution, the
solution must satisfy u^{-1} = 0. Hence, y^{-n+1} = y^{-n+2} = … = y^0 = 0 and y^1 = C x_0. The next
relevant equality in (34) is
y^2 = C A x_0 + C A B u^0.   (68)
We define J := { i ∈ k̄ | C_i x_0 = 0 }. Since one of the expressions (35) or (36) has to be satisfied
for i ∈ J, the conditions
y_i^2 ≥ 0, u_i^0 ≥ 0, y_i^2 u_i^0 = 0, i ∈ J
have to hold. Because y_i^1 > 0 for elements i ∈ J^c, 0 = u_i^0 = u_i^1 = … = u_i^n must hold to satisfy
(36). Considering only i ∈ J, we can write down the LCP following from (68) and the above
complementarity conditions:
y_J^2 = C_J A x_0 + C_J A B_J u_J^0   (69)
y_J^2 ≥ 0, u_J^0 ≥ 0, (y_J^2)^T u_J^0 = 0.   (70)
This LCP is identical to the LCP (66) and (67). This shows that the re-initialisation by means
of LDCP^n(x_0) leads to the same result as the minimization problem (34). □
From this proof, we see that for feasible initial states only proper rational solutions to RCP
occur, i.e. only jumps take place along Im B .
Example 36 To illustrate the mode selection and re-initialisation, we consider the two-carts
system as in section 2, but this time with an additional hook; see figure 4.
The complementarity description is given by
ẋ_1(t) = x_3(t)
ẋ_2(t) = x_4(t)
ẋ_3(t) = −2x_1(t) + x_2(t) + u_1(t) + u_2(t)
ẋ_4(t) = x_1(t) − x_2(t) − u_2(t)
y_1(t) := x_1(t)
y_2(t) := x_1(t) − x_2(t),
where u_1, u_2 denote the reaction forces exerted by the stop and the hook, respectively. These
equations are completed by the complementarity conditions (18c). Note that
M = [1 0; 0 1], D = [0 0; 0 0], K = [2 −1; −1 1], F = [1 0; 1 −1]   (71)
leads to a description as in the beginning of this section.
Using the minimization problem to determine the re-initialisation and mode selection in case
of an initial state (x_{10}, x_{20}, x_{30}, x_{40})^T with x_{10} = x_{20} = 0 results in the alternatives as shown
in figure 5. Note that the minimization problem consists of finding the minimal distance to
the feasible set (the area indicated by "unconstrained"). The arrows denote the re-initialisation
directions.
Figure 5: Re-initialisation alternatives in the (x_3, x_4)-plane: unconstrained, stop-constrained, hook-constrained and hook/stop-constrained regions.
Entering the stop-constrained mode is only valid if for sufficiently large s the above two ex-
pressions are nonnegative. This requires x_{30} ≤ 0 and x_{40} ≤ 0. This indeed corresponds to the
indicated area for the stop-constrained mode in figure 5. Note that the polynomial parts of û_1
and û_2 equal −x_{30} and 0, respectively. Hence, u_imp = (−x_{30}, 0)^T δ. According to (9), the state
jump equals B(−x_{30}, 0)^T = (0, 0, −x_{30}, 0)^T. This agrees with the direction of the arrows in
figure 5. In the same way, the other modes can be verified.
This example also shows that the mode selection procedure as mentioned in [20] does not agree
with Moreau's sweeping process. It is proposed there that if I is the current mode and violation
of (22) occurs at time τ in state x(τ), the new mode is given by
J := (I \ Θ_2) ∪ Θ_1,
where
Θ_1 := { i ∈ I^c | y_i(t; x(τ), I) < 0, t ∈ (τ, τ + ε) for some ε > 0 }.
In the example, this means that if we are in the unconstrained mode (I = ∅) and we arrive in
x(τ) = (0, 0, −1, 2)^T, the selected mode should be J = {1, 2}, the hook/stop-constrained mode.
This does not agree with the minimization problem illustrated in figure 5, which indicates the
hook-constrained mode. A physical argument against the proposal of [20] might be that removing
the stop does not lead to violation of y_1(t) ≥ 0.
Another phenomenon that may be illustrated in the above example is that the solutions of
linear complementarity systems do not always depend continuously on the initial state. This
discontinuous dependence is caused by the sensitivity of the solutions to the order in which
constraints become active. Consider the initial states x_0(ε) = (ε, ε, -2, 1)^T, ε > 0. For ε = 0
the solution is a jump to (0, 0, 0, 0)^T, after which the system stays in its equilibrium position.
For ε > 0, first the hook becomes active, resulting in a jump to (ε, ε, -1/2, -1/2)^T. This is followed
by a regular continuation in the hook-constrained mode until the left cart hits the stop. The
state just before the impact is (0, 0, -1/2 + g(ε), -1/2 + g(ε))^T for some continuous function g(ε)
with g(0) = 0. Re-initialisation yields the new state (0, 0, 0, -1/2 + g(ε))^T, which converges to
(0, 0, 0, -1/2)^T as ε ↓ 0. Obviously, the system has a discontinuity in (0, 0, -2, 1)^T. One may
also note that the sequence of initial states x_0(ε) = (0, -ε, -2, 1)^T, ε > 0, leads after two re-
initialisations, for ε ↓ 0, to the limit state (0, 0, 1/2, 1/2)^T. This alternative limit corresponds to a
situation in which first the stop-constrained and then the hook-constrained mode occurs.
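The order-dependence of the limit states can be checked with a small sketch. We assume, as in the example above, that a hook re-initialisation equalises the two cart velocities (x_3, x_4) to their common mean and a stop re-initialisation sets the left cart velocity x_3 to zero; the function names are ours, not the paper's:

```python
def hook_jump(v3, v4):
    # Hook becomes active: both velocities jump to their mean value.
    m = (v3 + v4) / 2
    return m, m

def stop_jump(v3, v4):
    # Left cart hits the stop: its velocity x_3 is reset to zero.
    return 0.0, v4

v = (-2.0, 1.0)  # initial velocities (x_3, x_4)

# Hook first, then stop (initial states (eps, eps, -2, 1), eps -> 0):
print(stop_jump(*hook_jump(*v)))   # (0.0, -0.5)

# Stop first, then hook (initial states (0, -eps, -2, 1), eps -> 0):
print(hook_jump(*stop_jump(*v)))   # (0.5, 0.5)
```

The two event orders produce the different limit velocities (0, -1/2) and (1/2, 1/2), matching the two limit states computed above.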
9 Conclusions
In this paper we studied linear complementarity systems. Constraints allowing a complementarity
description occur in a natural way in many physical systems. The basic characteristic of
these systems is the interconnection of continuous dynamics and discrete transition rules. As
such, these systems can be seen as "hybrid systems" involving both continuous and discrete
dynamics. A description of the complete dynamics of linear complementarity systems has been
proposed. Our description is based on an explicit notion of "mode" or "discrete state." We
have shown the equivalence of several methods of carrying out the crucial mode selection step,
which connects continuous states to discrete states. We focused on questions of existence and
uniqueness of solutions of linear complementarity systems. A notion of well-posedness has been
introduced, which formalizes the idea that from all states smooth continuation is possible after
at most a finite number of jumps. Well-posedness is guaranteed whenever the leading column
coefficient matrix and the leading row coefficient matrix associated with the state space representation
of the system are P-matrices. In particular, this result implies that linear mechanical
systems with unilateral constraints are well-posed. We showed that our description of solutions
produces the same jump rule as Moreau's sweeping process. The framework proposed here
is well suited for the numerical computation of trajectories of complementarity systems, as is
illustrated by various examples.
References
[1] J.P. Aubin and A. Cellina. Differential Inclusions. Springer, Berlin, 1984.
[2] W.M.G. van Bokhoven. Piecewise Linear Modelling and Analysis. Kluwer, Deventer, the Netherlands, 1981.
[3] B. Brogliato. Nonsmooth Impact Mechanics. Models, Dynamics and Control, volume 220 of Lecture Notes in Control and Information Sciences. Springer, London, 1996.
[4] R.W. Cottle, J.-S. Pang, and R.E. Stone. The Linear Complementarity Problem. Academic Press, Inc., Boston, 1992.
[5] B. De Moor, L. Vandenberghe, and J. Vandewalle. The generalized linear complementarity problem and an algorithm to find all solutions. IEEE Transactions on Circuits and Systems, 57:415–426, 1992.
[6] B. De Schutter and B. De Moor. The extended linear complementarity problem. Mathematical Programming, 71:289–325, 1995.
[7] B. De Schutter and B. De Moor. The linear dynamic complementarity problem is a special case of the extended linear complementarity problem. Technical Report TR 97-21, KU Leuven, Dept. ESAT-SISTA, 1997. URL: ftp://ftp.esat.kuleuven.ac.be/pub/SISTA/deschutter/reports/97-21.ps.gz.
[8] P. Dupuis and A. Nagurney. Dynamical systems and variational inequalities. Annals of Operations Research, 44:9–42, 1993.
[9] A.F. Filippov. Differential Equations with Discontinuous Righthand Sides. Mathematics and Its Applications. Kluwer, Dordrecht, The Netherlands, 1988.
[10] M.L.J. Hautus. The formal Laplace transform for smooth linear systems. Mathematical Systems Theory, Proc. Int. Symp., Lecture Notes in Economics and Mathematical Systems, 131:29–47, 1975.
[11] M.L.J. Hautus and L.M. Silverman. System structure and singular control. Linear Algebra and its Applications, 50:369–402, 1983.
[12] H.W. Kuhn and A.W. Tucker. Nonlinear programming. Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability, pages 481–492, 1951.
[13] B.C. Kuo. Automatic Control Systems. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1987.
[14] P. Lötstedt. Mechanical systems of rigid bodies subject to unilateral constraints. SIAM Journal on Applied Mathematics, 42(2):281–296, 1982.
[15] O. Maler, editor. Hybrid and Real-Time Systems (Proc. Intern. Workshop HART'97, Grenoble, France, March 1997), volume 1201 of Lecture Notes in Computer Science. Springer, Berlin, 1997.
[16] M.D.P. Monteiro Marques. Differential Inclusions in Nonsmooth Mechanical Problems. Shocks and Dry Friction. Progress in Nonlinear Differential Equations and their Applications. Birkhäuser, Basel, 1993.
[17] J.J. Moreau. Unilateral contact and dry friction in finite freedom dynamics. Nonsmooth Mechanics and Applications, CISM Courses and Lectures, 302:1–82, 1988.
[18] A. Nagurney and D. Zhang. Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer, Boston, 1996.
[19] D. Percivale. Uniqueness in the elastic bounce problem. Journal of Differential Equations, 56:206–215, 1985.
[20] A.J. van der Schaft and J.M. Schumacher. The complementary-slackness class of hybrid systems. Mathematics of Control, Signals and Systems, 9:266–301, 1996.
[21] A.J. van der Schaft and J.M. Schumacher. Complementarity modelling of hybrid systems. Technical Report BS-R9611, CWI, Amsterdam, 1996. URL: http://www.cwi.nl/ftp/CWIreports/BS/BS-R9611.ps.Z.
[22] M. Schatzman. A class of nonlinear differential equations of second order in time. Nonlinear Analysis, Theory, Methods & Applications, 2(3):355–373, 1978.
[23] L. Schwartz. Théorie des Distributions. Hermann, Paris, 1951.