
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 37, NO. 7, JULY 1992

Fig. 6. Step response simulations: t_ij for a step input in channel i, output in j. [Plot panels not recoverable from the scan.]

The Gauss elimination method [4] applied to the latter equation gives the following recursive equations for t_ij, i = 1, ..., n:

[equations (A.1a), (A.1b) garbled in the scan; (A.1b) defines D_ij in terms of the P^k_ij and the t_uj, u ≠ i]

and [P^k_ij] is an (n − k + 1) × n matrix transfer function, defined recursively, with

i ≥ k; j = 1, ..., n; k = 1, ..., n.   (A.1c)

Guided by (A.1a), find g_i and f_ij such that 1/(1 + g_i/p_ii) is asymptotically stable and [condition garbled in the scan].

Repeating the above process for i = 1, ..., n, G = diag(g_i) and F = [f_ij] will guarantee that the system described in Fig. 1 is asymptotically stable and satisfies the performance A_ij ≤ |t_ij| ≤ B_ij, i.e., the transfer functions t_ij are bounded between two given Bode curves A_ij(ω) and B_ij(ω), which are called the closed-loop tolerances.

APPENDIX B

The phase of D_ij is not known; therefore, the worst case of inequalities (3.1a), (3.1b) is used, i.e., g_1, f_11, and f_12 which satisfy (B.1a), (B.1b) will also satisfy (3.1a), (3.1b).

The algorithm used in this work to find the bounds on g_1 is as follows.
1) A phase α of g_1 is chosen.
2) |f_11| = (A_11 + B_11)/2 is substituted in (B.1a) and a computer search is applied to find the minimum value ḡ_1 such that if |g_1| ≥ ḡ_1, then (B.1a), (B.1c) hold.
3) A search on |f_11| is executed to find g_1^u = min{ḡ_1 | all f_11}.
4) The process described in 1)-3) is repeated for α = 0, 5, ..., 355 deg.
5) The process described in 1)-4) is applied to (B.1b), (B.1c) to find the minimum value, denoted by g_1^v.
6) ḡ_1 = max(g_1^u, g_1^v) is chosen as a bound point on g_1 at phase(g_1) = α, and first-order interpolation between these points gives the bound on g_1.
7) The process described in 1)-6) is repeated at several ω values.
8) In the frequency range ω > 0.05, i.e., frequencies higher than twice the system bandwidth, the bounds are calculated to satisfy only (B.1c).
9) g_1 that satisfies the bounds is shaped.
10) f_11 that satisfies (B.1a) is chosen. This is possible since from (B.1a) f_11 satisfies [bound garbled in the scan].

REFERENCES
[1] S. Skogestad, M. Morari, and J. Doyle, "Robust control of ill-conditioned plants: High-purity distillation," IEEE Trans. Automat. Contr., vol. 33, pp. 1092-1105, Dec. 1988.
[2] O. Yaniv and I. Horowitz, "A quantitative design method for MIMO linear feedback systems having uncertain plants," Int. J. Contr., vol. 43, no. 2, pp. 401-421, 1986.
[3] O. Yaniv and B. Schwartz, "A criterion for loop stability in the Horowitz synthesis of MIMO feedback systems," Int. J. Contr., submitted for publication.
[4] F. R. Gantmacher, The Theory of Matrices. New York: Chelsea, 1960, p. 23.
[5] I. Horowitz and M. Sidi, "Synthesis of feedback systems with large plant ignorance for prescribed time-domain tolerances," Int. J. Contr., vol. 16, no. 2, pp. 287-309, 1972.

Discrete-Time Entropy Formulation of Optimal and Adaptive Control Problems

Yweting A. Tsai, Francisco A. Casiello, and Kenneth A. Loparo

Abstract-This note presents the discrete-time version of the entropy formulation of optimal control problems developed by Saridis in [2].

Manuscript received January 5, 1990; revised November 22, 1991. This work was supported by the U.S. Army through the NASA Lewis Research Center under Grant NAG3-788.
The authors are with the Department of Systems Engineering and the Center for Stochastic and Chaotic Processes in Science and Technology, Case Western Reserve University, Cleveland, OH 44106.
IEEE Log Number 9108142.

0018-9286/92$03.00 © 1992 IEEE


Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. We establish the equivalence between the optimal control problem and the optimal entropy problem and then decompose the total entropy into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.

INTRODUCTION

In this note the discrete-time version of the entropy formulation for optimal control problems as developed by Saridis in [2] is presented.

The main idea is the following: given a dynamical system and a control objective, the optimal control problem can be thought of in terms of a decision maker, the controller, who has to select an optimal policy from a set of admissible policies. The uncertainty that the controller has with regard to the selection of the control is characterized by means of a probability distribution function. It is shown that if the probability distribution function is selected according to the Jaynes principle of maximum entropy, then the solution of the optimal control problem and the problem of optimizing the total entropy are equivalent.

The total entropy is a measure of the information contained in the probability distribution which characterizes the controller's uncertainty regarding the selection of a closed-loop feedback control policy for a stochastic problem. A decomposition of the total entropy is given in terms of the entropy corresponding to the certainty equivalent law, the entropy of the estimator, and the so-called equivocation entropy term.

We begin with a deterministic optimal control problem and obtain the discrete-time version of Saridis' theorem. We then extend the result to stochastic systems and we examine the optimal estimation problem. Examples and applications are included in a companion paper.

I. THE DETERMINISTIC PROBLEM

A. Problem Formulation

Consider the following discrete-time system:

    x_{k+1} = f_k(x_k, u_k).   (1.1.1)

Here x_k ∈ Ω_x ⊂ R^n is the n-dimensional state vector, u_k ∈ Γ_u ⊂ R^m is an m-dimensional control vector, and f_k(·,·): Z^+ × Ω_x × Γ_u → Ω_x is the one-step state transition map.

We are also given a cost functional

    J(U^{N-1}) = Σ_{k=0}^{N-1} R_k(x_{k+1}, u_k) + R_N(x_N)   (1.1.2)

with U^{N-1} = {u_0, ..., u_{N-1}}, and R_k: Z^+ × Ω_k × Γ_k → R^+, k = 0, ..., N, are positive convex functions of the state and control variables.

A control policy is said to be admissible if it is a closed-loop state feedback control law, that is u_k = u_k(x_k): Z^+ × Ω_x → Γ_u ⊂ Ω_u. The objective is to find an admissible policy to minimize J(U^{N-1}).

B. Statement of the Optimal Entropy Problem

We think of the control design problem in the following terms: a controller or decision maker is uncertain about the selection of the control policy U^{N-1} that optimizes the cost functional. We characterize that uncertainty by means of a probability density function p(u_0, ..., u_{N-1}) = p(U^{N-1}). The amount of information contained in p(U^{N-1}) can be measured by means of the Shannon entropy H(U^{N-1}), where

    H(U^{N-1}) = -∫ p(u_0, ..., u_{N-1}) ln p(u_0, ..., u_{N-1}) dx_0 ... dx_{N-1}.   (1.1.3)

As u_k = u_k(x_k), H(U^{N-1}) is a functional of X^{N-1} and H(U^{N-1}) is the differential Shannon entropy associated with p(U^{N-1}).

To incorporate the cost (1.1.2) into the problem we recognize that if U^{N-1} has density p(U^{N-1}), then we can evaluate

    E{J(U^{N-1})} = ∫ J(U^{N-1}) p(U^{N-1}) dX^{N-1} = M

where M ≥ J(U^{N-1}) with equality holding when U^{N-1} = U^{N-1}*.

In order to find p(U^{N-1}), we use Jaynes' principle of maximum entropy, which states that given statistics of a random variable with unknown probability distribution, the most likely distribution is the one which has maximum entropy. In this context, we formulate the following version of Saridis' theorem.

Theorem 1: A closed-loop state feedback policy U^{N-1} minimizes (1.1.2) subject to (1.1.1) if and only if U^{N-1} minimizes the total entropy H(U^{N-1}), where

    H(U^{N-1}) = -∫ p(U^{N-1}) ln p(U^{N-1}) dX^{N-1}

and p(U^{N-1}) is selected to maximize H(U^{N-1}) subject to the normalization constraint

    ∫ p(U^{N-1}) dX^{N-1} = 1   (1.1.5)

and

    E{J(U^{N-1})} = M, with M = J* = J(U^{N-1}*) when U^{N-1} = U^{N-1}*.   (1.1.6)

That is,

    min_{U^{N-1}} max_{p(U^{N-1})} H(U^{N-1})

s.t. (1.1.5), (1.1.6), and (1.1.7).

The proof of Theorem 1 is based on a representation of the total entropy in terms of conditional entropies and a formulation of the corresponding entropy problem as a multistage optimization problem.

C. Decomposition of the Total Entropy

Proposition 1: The total entropy function H(U^{N-1}) can be decomposed into a sum of conditional entropy terms, that is

    H(U^{N-1}) = Σ_{k=0}^{N-1} H(u_k | U^{k-1}),   H(u_0 | U^{-1}) = H(u_0).   (1.3.1.a)
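Proposition 1 is the chain rule for Shannon entropy. As a quick numerical sanity check, consider a hypothetical two-stage problem with binary controls and an invented joint pmf p(u_0, u_1) (the numbers are illustrative only, not from the paper); the total entropy equals H(u_0) + H(u_1 | u_0):

```python
import math

def H(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Invented joint pmf p(u0, u1) over two binary controls (illustrative only).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

# Marginal p(u0) and conditional entropy H(u1 | u0).
p_u0 = {a: joint[(a, 0)] + joint[(a, 1)] for a in (0, 1)}
H_u1_given_u0 = sum(
    p_u0[a] * H([joint[(a, b)] / p_u0[a] for b in (0, 1)]) for a in (0, 1)
)

total = H(joint.values())                 # H(u0, u1)
chain = H(p_u0.values()) + H_u1_given_u0  # H(u0) + H(u1 | u0)
assert abs(total - chain) < 1e-12
```

The identity holds for any joint distribution; iterating it over the stages gives exactly the sum of "local" conditional entropies in (1.3.1.a).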


Proof:

    H(U^{N-1}) = -∫ p(U^{N-1}) ln p(U^{N-1}) dX^{N-1}
               = -∫ p(u_{N-1} | U^{N-2}) p(U^{N-2}) ln [p(u_{N-1} | U^{N-2}) p(U^{N-2})] dX^{N-1}
               = E_{U^{N-2}}{ -∫ p(u_{N-1} | U^{N-2}) ln p(u_{N-1} | U^{N-2}) dx_{N-1} } - ∫ p(U^{N-2}) ln p(U^{N-2}) dX^{N-2}
               = H(U^{N-2}) + H(u_{N-1} | U^{N-2})

and the result follows from induction.

We have decomposed the total uncertainty measure regarding the selection of the control, H(U^{N-1}), into a sum of "local" uncertainty measures regarding the selection of the control function u_k at time k, assuming that the policy U^{k-1} is known. We next set up the entropy problem as a multistage optimization problem.

D. Formulation as a Multistage Optimization Problem

We use the decomposition obtained above to formulate a multistage optimization problem. Consider the problem

    min_{U^{N-1}} max_{p(U^{N-1})} H(U^{N-1})   (1.4.1)

s.t.

    E{J(U^{N-1})} = M, with M = J* when U^{N-1} = U^{N-1}*   (1.4.2)

    ∫ p(U^{N-1}) dX^{N-1} = 1.   (1.4.3)

Let

    H*(U^{N-1}*) = min_{U^{N-1}} max_{p(U^{N-1})} H(U^{N-1})

then using (1.3.1) and letting p_k = p(u_k | U^{k-1}) we can write

    H*(U^{N-1}*) = min_{U^{N-1}} max_{p_0, ..., p_{N-1}} Σ_{k=0}^{N-1} H(u_k | U^{k-1})   (1.4.4)

where (1.4.4) follows from the fact that the probability characterizing the uncertainty regarding the selection of u_k depends only on U^{k-1} and not on the future controls u_{k+1}, ..., u_{N-1}. Then using (1.3.1.b),

    H*(U^{N-1}*) = min_{U^{N-1}} max_{p(U^{N-1})} ( H(u_0) + E_{u_0}{ H(u_1 | u_0) } + ... + E_{U^{N-2}}{ H(u_{N-1} | U^{N-2}) } )   (1.4.5)

with

    H(u_k | U^{k-1}) = -∫ p(u_k | U^{k-1}) ln p(u_k | U^{k-1}) du_k

and the expectation over u_k is conditioned on U^{k-1}. Interchanging the minimization and maximization operators with the expectation operator we obtain

    H*(U^{N-1}*) = min_{u_0} max_{p_0} ( H(u_0) + E_{u_0}{ min_{u_1} max_{p_1} H(u_1 | u_0) + ... + E_{U^{N-2}}{ min_{u_{N-1}} max_{p_{N-1}} H(u_{N-1} | U^{N-2}) } ... } ).   (1.4.6)

Define the cost-to-go W_k in the following way:

    W_k = E_{U^{k-1}}{ min_{u_k} max_{p_k} ( H(u_k | U^{k-1}) + W_{k+1} ) }   (1.4.7)

with the boundary condition W_N = 0; then W_k is the optimal cost for an (N - k) stage optimization problem. Equation (1.4.7) is the Bellman equation for the optimal entropy problem.

Before we proceed, we need to deal with the global constraints (1.4.2) and (1.4.3). Recall (1.4.2):

    E_{U^{N-1}}{ J(U^{N-1}) } = E_{U^{N-1}}{ Σ_{k=0}^{N-1} R_k + R_N } = M

which can be written as

    E_{u_0}{ E_{u_1}{ ... E_{u_{N-1}}{ Σ_{k=0}^{N-1} R_k + R_N | U^{N-2} } ... } } = M.

This can be written as a collection of local constraints

    [local constraints garbled in the scan]   (1.4.9)

with M = J* when U^{N-1} = U^{N-1}*. Similarly, (1.4.3) can be written as

    ∫ p(u_k | U^{k-1}) dx_k = 1,   k = 0, ..., N - 1.   (1.4.10)

Summarizing, the multistage optimization problem can be written as follows:

    min_{u_k} max_{p_k} { H(u_k | U^{k-1}) + W_{k+1} }

s.t. (1.4.9), (1.4.10), and W_k satisfies the following boundary conditions:

    W_N = 0
    W_0 = H*(U^{N-1}*).

In the next section we establish the equivalence between the optimal control problem and the optimal entropy problem.
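The cost-to-go recursion (1.4.7) has the backward structure of a standard Bellman recursion, the same shape as V_k = min_{u_k}{R_k + V_{k+1}} appearing later in Proposition 2. A minimal sketch of such a backward pass on an invented scalar system x_{k+1} = x_k + u_k with quadratic stage cost (illustrative only; this is the ordinary dynamic-programming pass, not the entropy recursion itself):

```python
# Backward dynamic programming on a small grid: an invented scalar system
# x_{k+1} = x_k + u_k with stage cost x^2 + u^2 and terminal cost x^2.
N = 3
states = [-2, -1, 0, 1, 2]
controls = [-1, 0, 1]

V = {x: x * x for x in states}          # V_N(x) = R_N(x)
for k in range(N - 1, -1, -1):
    V_next = V
    V = {
        x: min(
            x * x + u * u + V_next[max(min(x + u, 2), -2)]  # clip to the grid
            for u in controls
        )
        for x in states
    }

assert V[0] == 0   # from x = 0, choosing u = 0 at every stage is optimal
assert V[1] == 2   # from x = 1: pay 1 + 1 to jump to 0, then stay there
```

Each backward step plays the role of the min-max stage in (1.4.7), with the entropy term replaced here by an ordinary stage cost.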

E. Equivalence Between the Optimal Control Problem and the Optimal Entropy Problem

The equivalence between the optimal control problem and the optimal entropy problem is established in the following proposition and corollary.

Proposition 2:

Given: The deterministic optimal control (DOC) problem

    min_{U^{N-1}} J(U^{N-1})

    J(U^{N-1}) = Σ_{k=0}^{N-1} R_k + R_N   (DOC)

s.t.

    x_{k+1} = f_k(x_k, u_k)

as defined in Section I-A, and the deterministic optimal entropy (DOE) problem

    min_{U^{N-1}} max_{p(U^{N-1})} H(U^{N-1})

    H(U^{N-1}) = -∫ p(U^{N-1}) ln p(U^{N-1}) dX^{N-1}   (DOE)

s.t.

    ∫ p(U^{N-1}) dX^{N-1} = 1

    E{J(U^{N-1})} = M, with M = J* if U^{N-1} = U^{N-1}*.

Then: If there exists a unique minimizing solution U^{N-1}* to the DOC problem, it is also the solution of the DOE problem. Moreover, the maximizing probability density function p*(U^{N-1}) characterizing the uncertainty that the controller has with regard to the selection of the control is

    p*(U^{N-1}) = e^{-λ_0 - β_0 (Σ_{i=0}^{N-1} R_i + R_N)}   (1.5.1)

with

    [expressions for λ_0 and β_0 garbled in the scan]

where

    V_k = min_{u_k} { R_k + V_{k+1} }

with

    V_N = R_N.

Proof: Refer to [3].

Corollary 1: A necessary and sufficient condition for H(U^{N-1}) to be minimized when p(U^{N-1}) is selected according to Jaynes' principle, subject to (1.1.6), is that U^{N-1} is selected as the solution of the DOC problem.

Proof: The proof of Corollary 1 follows directly from the results given in [3].

Note: Proposition 2 states that the solution of the DOC problem is also the solution of the optimal entropy problem, and Corollary 1 states that solving the DOE problem solves the DOC problem; this then establishes Theorem 1.

II. THE STOCHASTIC OPTIMAL CONTROL AND ESTIMATION PROBLEMS

A. The Stochastic Optimal Control Problem

The results of the previous section can be extended to the following stochastic optimal control problem:

    min_{U^{N-1}} J̃(U^{N-1})   (SOC)

    J̃(U^{N-1}) = E{ J(U^{N-1}) | Y^{N-1} } = E{ Σ_{k=0}^{N-1} R_k + R_N | Y^{N-1} }   (2.1.1.a)

s.t.

    x_{k+1} = f_k(x_k, u_k, v_k)   (2.1.1.b)

    y_k = g_k(x_k, w_k)   (2.1.1.c)

where the functions {R_k}_{k=0}^{N} and the state and control vectors are defined as in Section I. Here {v_k} and {w_k} are mutually independent sequences of independent random variables with known distribution. The initial state x_0 is random with known distribution and is independent of {v_k} and {w_k}. The observation variable is y_k ∈ Ω_y ⊂ R^p, Y^k = {y_0, ..., y_k}, k = 0, ..., N - 1, and E{· | Y^k} denotes the conditional expectation over the smallest σ-algebra with respect to which the y_k's are measurable.

A control policy is said to be admissible if it is a closed-loop feedback control law, that is u_k = u_k(Y^k): Z^+ × Ω_y^{k+1} → Γ_u ⊂ Ω_u, where Ω_y^{k+1} = Ω_y × ... × Ω_y (k + 1 times). The objective is to find an admissible policy which minimizes J̃(U^{N-1}).

The stochastic optimal entropy problem can be stated as

    min_{U^{N-1}} H̃(U^{N-1})   (SOE)

    H̃(U^{N-1}) = -∫ p(U^{N-1}) ln p(U^{N-1}) dY^{N-1}

where p(U^{N-1}) is selected to maximize H̃(U^{N-1}) subject to

    ∫ p(U^{N-1}) dY^{N-1} = 1

    E{ E{ J(U^{N-1}) | Y^{N-1} } } = M̃

with M̃ = J̃* = J̃(U^{N-1}*) when U^{N-1} = U^{N-1}*.

The maximizing probability density function p̃*(U^{N-1}) which characterizes the uncertainty that the controller has with regard to the selection of the control is

    p̃*(U^{N-1}) = e^{-λ̃_0 - β̃_0 E{ Σ_{i=0}^{N-1} R_i + R_N | Y^{N-1} }}   (2.1.1)

with

    λ̃_0 = Σ_{k=1}^{N-1} λ̃_k,   β̃_0 = Σ_{k=0}^{N-1} β̃_k   (2.1.2.a)

    [equation (2.1.2.b) garbled in the scan]

where

    Ṽ_k = min_{u_k} E{ R_k + Ṽ_{k+1} | Y^k }

with

    Ṽ_N = R_N.

Now, the results of Proposition 1 directly apply and the results of Section I have been extended to the SOC problem. In the next section we examine the estimation problem by using the results obtained for the SOC problem; this is yet another consequence of the duality between optimal control and optimal estimation.
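The exponential form of the maximizing densities in (1.5.1) and (2.1.1) is the Gibbs distribution produced by Jaynes' principle. A hedged discrete illustration (invented costs, not from the paper; bisection is used to meet the expected-cost constraint): among densities with the same expected cost M, the Gibbs one attains the larger entropy.

```python
import math

costs = [1.0, 2.0, 4.0, 8.0]   # invented costs J(u) of four candidate controls
M = 2.5                        # target expected cost (J* = 1.0 < M < mean = 3.75)

def gibbs(beta):
    w = [math.exp(-beta * c) for c in costs]
    z = sum(w)
    return [x / z for x in w]

def expected_cost(p):
    return sum(pi * c for pi, c in zip(p, costs))

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# E[J] under the Gibbs density decreases monotonically in beta: bisect for beta.
lo, hi = 0.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if expected_cost(gibbs(mid)) > M:
        lo = mid
    else:
        hi = mid
p_star = gibbs(0.5 * (lo + hi))

# An alternative density with the same expected cost M = 2.5.
q = [0.5, 0.0, 0.5, 0.0]       # 0.5 * 1.0 + 0.5 * 4.0 = 2.5

assert abs(expected_cost(p_star) - M) < 1e-6
assert entropy(p_star) > entropy(q)    # the Gibbs form has larger entropy
```

The bisection plays the role of choosing the multiplier β_0 so that the constraint E{J} = M in (1.1.6) is met; λ_0 corresponds to the normalization ln z.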

B. The Optimal Estimation Problem

Given the stochastic dynamical system specified by (2.1.1), we say that a state estimate x̂_k is admissible if it is a measurable function of past and current observations and past controls. The objective of the stochastic optimal estimation (SOE) problem is to find an admissible estimate which minimizes J(X̂^{N-1}):

    J̃(X̂^{N-1}) = E{ Σ_{k=0}^{N-1} ||x_k - x̂_k||^2 | Y^{N-1} }.   (SOE)

If p̂(X̂^{N-1}) is the probability density function characterizing the estimator's uncertainty regarding the selection of the optimal estimate, the optimal estimator entropy (OEE) problem can be stated as

    min_{X̂^{N-1}} H(X̂^{N-1})   (OEE)

    H(X̂^{N-1}) = -∫ p(X̂^{N-1}) ln p(X̂^{N-1}) dY^{N-1}

where p(X̂^{N-1}) is selected to maximize H(X̂^{N-1}) subject to

    ∫ p(X̂^{N-1}) dY^{N-1} = 1

    E{ E{ J(X̂^{N-1}) | Y^{N-1} } } = M̂

with M̂ = Ĵ = Ĵ(X̂^{N-1}*) when X̂^{N-1} = X̂^{N-1}*.

The probability density function p̂*(X̂^{N-1}) which characterizes the uncertainty that the estimator has with regard to the selection of the state estimate and maximizes the entropy is

    p̂*(X̂^{N-1}) = e^{-λ̂_0 - β̂_0 E{ Σ_{k=0}^{N-1} ||x_k - x̂_k||^2 | Y^{N-1} }}   (2.2.1)

with

    λ̂_0 = Σ_{k=1}^{N-1} λ̂_k,   β̂_0 = Σ_{k=0}^{N-1} β̂_k   (2.2.2.a)

    [equation (2.2.2.b) garbled in the scan]

where

    V̂_k = min_{u_k} E{ R_k + V̂_{k+1} | Y^k }

with

    V̂_N = R_N.

In the next section we examine the decomposition of the total entropy for a stochastic optimal control problem in terms of conditional entropies; this parallels the results given in Proposition 1.

III. STOCHASTIC SYSTEMS WITH PARTIAL INFORMATION: DECOMPOSITION OF THE TOTAL ENTROPY

Consider the stochastic discrete-time system defined as in Section II-A:

    x_{k+1} = f_k(x_k, u_k, v_k)   (3.1.1)

    y_k = g_k(x_k, w_k),   k = 0, 1, ....   (3.1.2)

The total entropy for this problem is

    H(U^{N-1}) = -∫ p(U^{N-1}) ln p(U^{N-1}) dY^{N-1}.   (3.1.3)

By using marginal probabilities we can write the density function p(U^{N-1}) in terms of the joint density p(U^{N-1}, X̂^{N-1}) characterizing the controller's uncertainty regarding the selection of the control and the state estimate. Then (3.1.3) becomes

    H(U^{N-1}) = -∫ [ ∫ p(U^{N-1}, X̂^{N-1}) dX̂^{N-1} ] ln p(U^{N-1}) dY^{N-1}.   (3.1.4)

Write

    p(U^{N-1}, X̂^{N-1}) = Π_{k=0}^{N-1} p(u_k | U^{k-1}, X̂^{N-1}) Π_{k=1}^{N-1} p(x̂_k | X̂^{k-1})   (3.1.5)

and

    [equation (3.1.6) garbled in the scan].   (3.1.6)

Then, substituting into (3.1.3) and after some tedious algebraic manipulations [3], we obtain

    H(U^{N-1}) = Σ_{k=0}^{N-1} H(u_k | X̂^{N-1}, U^{k-1}) + Σ_{k=1}^{N-1} H(x̂_k | X̂^{k-1}) - Σ_{k=0}^{N-1} H(x̂_k | X̂^{k-1}, U^{k-1}).   (3.1.7)

The first term is the entropy of the control conditioned on the state given by the state estimator, the second term is the entropy of the state estimator, and the third term is the equivocation of the active transmission of information from the estimator to the controller.

This decomposition provides a useful physical interpretation of the approximations made by enforcing the separation principle, as is common practice in classical adaptive control problems. It follows that the relationship between the total entropy of the closed-loop optimal control problem and the entropy for the certainty equivalent controller is obtained by subtracting the equivocation of the active transmission of information from the estimator to the controller from the entropy for the estimator.

This is closely related to the problem of finding the cost for which a "passive learning" strategy is optimal, and relating it to the cost for which an "active learning" strategy is optimal, as discussed in [4], [5] for linear systems with unknown parameters in a finite set. An example and applications to the control of systems with uncertain observations are given in a companion paper [6].

IV. CONCLUSIONS

This note has presented a discrete-time version of Saridis' entropy formulation of optimal control problems. As many adaptive control
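For a single stage, the decomposition (3.1.7) reduces to the exact identity H(u) = H(u | x̂) + H(x̂) − H(x̂ | u): the estimator entropy minus the equivocation is the mutual information the estimate carries about the control. A numerical check with an invented joint pmf p(u, x̂) (illustrative only):

```python
import math

def H(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Invented joint pmf p(u, xhat) over a binary control and binary estimate.
joint = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.10, (1, 1): 0.40}

p_u = {u: joint[(u, 0)] + joint[(u, 1)] for u in (0, 1)}
p_x = {x: joint[(0, x)] + joint[(1, x)] for x in (0, 1)}

H_u = H(p_u.values())
H_x = H(p_x.values())
H_u_given_x = sum(p_x[x] * H([joint[(u, x)] / p_x[x] for u in (0, 1)]) for x in (0, 1))
H_x_given_u = sum(p_u[u] * H([joint[(u, x)] / p_u[u] for x in (0, 1)]) for u in (0, 1))

# Entropy of the control = conditional entropy + estimator entropy - equivocation.
assert abs(H_u - (H_u_given_x + H_x - H_x_given_u)) < 1e-12
```

Enforcing the separation principle amounts to dropping the equivocation term H(x̂ | u), which is exact only when the control carries no information back about the estimate.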

techniques are developed for discrete-time models and implemented in a discrete-time setting, the results presented in this note should be of some theoretical and practical value in characterizing and providing a physical interpretation for some of the adaptive estimation and control algorithms which have been described in the literature.

REFERENCES
[1] E. T. Jaynes, "Information theory and statistical mechanics," Phys. Rev., vol. 106, pp. 620-630, May 1957.
[2] G. N. Saridis, "Entropy formulation of optimal and adaptive control," IEEE Trans. Automat. Contr., vol. 33, pp. 713-721, Aug. 1988.
[3] Y. A. Tsai, "Discrete time entropy formulation of stochastic optimal control problems," M.S. project, Dep. Syst. Eng., Case Western Reserve Univ., 1989.
[4] F. A. Casiello and K. A. Loparo, "Optimal control of unknown parameter systems," IEEE Trans. Automat. Contr., vol. 34, pp. 1092-1094, Oct. 1989.
[5] F. A. Casiello and K. A. Loparo, "Optimal policies for passive learning controllers," Automatica, vol. 25, no. 5, pp. 757-764, Sept. 1989.
[6] F. A. Casiello and K. A. Loparo, "Entropy interpretation of active and passive learning policies," IEEE Trans. Automat. Contr., submitted for publication.

Comments on "A New and Simple Algorithm for Sliding Mode Trajectory Control of Robot Arm"

Piotr Myszkorowski

Abstract-Using the approach presented in the above paper,¹ we provide simplifications to the control algorithm presented there and yield stronger stability conditions.

We consider the following n-DOF robot manipulator dynamics:

    M(θ)θ̈ + C(θ, θ̇)θ̇ + h(θ, θ̇) = u(t) + w(t)   (1)

where θ ∈ R^n is the vector of generalized coordinates, M(θ) is the symmetric inertia matrix, C(θ, θ̇)θ̇ represents Coriolis and centripetal forces, h(θ, θ̇) comprises friction and gravitational forces, w(t) is a disturbance, and u(t) is the input control torque vector. We choose C(θ, θ̇) in (1) so as to satisfy [1]

    C(θ, θ̇) = (1/2)(Ṁ(θ) - J)

where J is skew-symmetric. We seek the control u(t) in order to guarantee uniformly asymptotically stable tracking of the reference trajectory θʳ. With the help of the tracking error e = θ - θʳ we thus define, as in the above paper,¹ the manifold S in R^{2n} as the kernel of the mapping

    (e, ė) → s(e) = Λe + ė,   Λ = diag(λ_1, ..., λ_n),   λ_i > 0.   (2)

Assuming the robot dynamics model structurally congruent with (1),

    M₀(θ)θ̈ + C₀(θ, θ̇)θ̇ + h₀(θ, θ̇) = u(t),   (3)

we can estimate the discrepancies between (1) and (3) with the help of

    ΔM(θ) = M(θ) - M₀(θ),
    ΔC(θ, θ̇) = C(θ, θ̇) - C₀(θ, θ̇),
    Δh(θ, θ̇) = h(θ, θ̇) - h₀(θ, θ̇).   (4)

We now evaluate, as in the above paper,¹ the time derivative of the function

    V = (1/2) sᵀMs.   (5)

Using the skew-symmetry of J, after a little algebra we get

    V̇ = sᵀ[ -ΔM(θ)(θ̈ʳ - Λė) - ΔC(θ, θ̇)(θ̇ʳ - Λe) - Δh(θ, θ̇) + Δu ]   (6)

with u = u⁰ + Δu and

    u⁰ = M₀(θ)(θ̈ʳ - Λė) + C₀(θ, θ̇)(θ̇ʳ - Λe) + h₀(θ, θ̇).   (7)

If we select Δu as

    Δu = -Q(θ, θ̇, t) SGN(s)   (8)

where Q = diag(Q_1, ..., Q_n), while

    Q_i(θ, θ̇, t) ≥ |{ ΔM(θ)(θ̈ʳ - Λė) + ΔC(θ, θ̇)(θ̇ʳ - Λe) + Δh(θ, θ̇) }_i| + γ_i   (9)

for R ∋ γ_i > 0, then we obtain

    V̇ ≤ -γ||s||   (10)

where γ = min_i(γ_i). Note that (10) guarantees that the trajectory of (1) reaches S in a finite time interval. Indeed, denoting the largest matrix eigenvalue by σ_max(·), from (5) we get

    V = (1/2) sᵀMs ≤ (1/2) σ_max(M) ||s||².   (11)

In view of (10) we then obtain

    V̇ ≤ -γ (2/σ_max(M))^{1/2} V^{1/2}

and finally

    (d/dt) V^{1/2} ≤ -(1/2) γ (2/σ_max(M))^{1/2}.

Thus, for an arbitrary initial value V₀ of V, the trajectory of (1) reaches the manifold S within the time ΔT, where

    ΔT ≤ 2 (σ_max(M)/2)^{1/2} V₀^{1/2} / γ.

From (10) it now follows that V = 0 thereafter. The assumption on Λ in (2) then guarantees that e tends to 0 exponentially.

We stress the fact that no M(θ) is required in (9), while the foregoing development shows the stronger tracking-error attenuation property of our algorithm compared to that of the paper.¹ We can also use in (8) the unit-vector control s/||s|| [2] with a properly chosen Q matrix. The implementation results of our algorithm with computed-torque-like versions of u⁰ were presented in [3].

Manuscript received October 19, 1990; revised March 15, 1991.
The author is with the INRIA Project Prisme, B.P. 93, 06902 Sophia Antipolis Cedex, France.
IEEE Log Number 9107976.
¹ Y.-F. Chen, T. Mita, and S. Wahi, IEEE Trans. Automat. Contr., vol. 35, pp. 828-829, July 1990.

REFERENCES
[1] R. Ortega and M. W. Spong, "Adaptive motion control of rigid robots: A tutorial," in Proc. 27th IEEE Conf. Decision Contr., Austin, TX, Dec. 1988, pp. 1575-1584.
[2] S. Gutman, "Uncertain dynamical systems-A Lyapunov min-max approach," IEEE Trans. Automat. Contr., vol. 24, pp. 437-443, June 1979.
[3] P. Myszkorowski, "A class of robust controllers for robot manipulators," in Proc. IFAC 11th World Congress, Tallinn, Estonia, 1990.
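The finite-time reaching bound in (10)-(11) and the subsequent exponential decay of e can be checked in simulation. A minimal sketch for a hypothetical 1-DOF plant (invented parameters: M = 1, C = 0, a constant unmodeled term h; the nominal model takes h₀ = 0), using the switching law (8) with the gain bound (9):

```python
import math

# Invented 1-DOF example: dynamics theta_dd = u - h_true (M = 1, C = 0),
# while the nominal model assumes h0 = 0, so Delta_h = h_true.
h_true, lam, gamma, dt = 2.0, 1.0, 0.5, 1e-4
Q = abs(h_true) + gamma                  # switching gain satisfying (9)

theta, dtheta, t = 0.5, 0.0, 0.0         # start with tracking error e = 0.5
while t < 10.0:
    e = theta - math.sin(t)              # reference trajectory theta_r = sin(t)
    de = dtheta - math.cos(t)
    s = lam * e + de                     # sliding variable, as in (2)
    u0 = -math.sin(t) - lam * de         # computed-torque term (7) for M0 = 1, h0 = 0
    du = -Q * (1.0 if s > 0 else -1.0)   # switching term (8)
    ddtheta = (u0 + du) - h_true         # true plant acceleration
    theta += dt * dtheta                 # explicit Euler step
    dtheta += dt * ddtheta
    t += dt

# After a finite reaching phase, s chatters near 0 and e decays like exp(-lam*t).
assert abs(lam * (theta - math.sin(t)) + (dtheta - math.cos(t))) < 1e-2
assert abs(theta - math.sin(t)) < 1e-2
```

Here ṡ = Δu − h_true, so |s| shrinks at a rate of at least γ, consistent with the reaching-time bound ΔT ≤ |s(0)|/γ from (10); the residual chatter is an artifact of the discrete-time sign function.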
