Project 2
data from the inverted pendulum simulator.

Issues and a Brief Discussion:

1. Linearization of nonlinear dynamical systems

The linearization of a nonlinear system \dot{x} = F(x,u), y = H(x,u) around an operating point (u_0, y_0), indirectly defining x_0, describes approximately the local (in space and time) evolution of the variation between the variables x, u, y and their nominal values. It is found via a straightforward Taylor expansion:
\delta\dot{x} = \frac{\partial F}{\partial x}(x_0,u_0)\,\delta x + \frac{\partial F}{\partial u}(x_0,u_0)\,\delta u, \qquad \delta y = \frac{\partial H}{\partial x}(x_0,u_0)\,\delta x + \frac{\partial H}{\partial u}(x_0,u_0)\,\delta u
Notice that all partials are evaluated at the operating point (u_0, y_0, x_0). Thus, for a constant operating point, the linearized system assumes a standard linear time-invariant state-space description, for which standard identification procedures are applicable. For practical application, the correct identification of the linearization point is important. (Otherwise, the identification is biased.) There are two ways to achieve this:

1. Extract the mean of u and y and define \delta u = u - mean(u), \delta y = y - mean(y); see the sketch after this list. (This assumes that (mean(u), mean(y)) is a steady-state.) Pros: Easy to implement, requires no other a priori knowledge. Cons: The identification bias can be unacceptable if transients or nonlinearities are present; DC information is intentionally dismissed.

2. Require the data to contain observations of the system at the operating point and estimate (u_0, y_0). Pros: Usually behaves better under transient bias; utilizes DC frequency information. Cons: Requires a specific identification protocol or additional information.
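As an illustration of the first method, here is a minimal MATLAB sketch; the variable names u, y, du, dy and the assumption that u, y are recorded column vectors are mine, not the handout's:

    % Method 1: remove sample means to form the variation signals.
    % Assumes u, y are column vectors recorded near the operating point.
    du = u - mean(u);   % variation of the input around its mean
    dy = y - mean(y);   % variation of the output around its mean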
2. Parametrization of linearized system

The general theory involves state-space descriptions and observer forms (see the Computer Controlled Systems class notes). This is greatly simplified for SISO systems of a given structure and is easily derived using simple algebraic manipulations. For example, for the Torque-Pendulum problem, we know that the linearized system has a transfer function of the form
\frac{b}{s^2 + \beta s + a}

where b, \beta, a are free parameters (b is a gain, \beta is the friction coefficient, and a is (gravitational constant)/(length)). We can then use an auxiliary 2nd-order filter (same order as the plant) to obtain the so-called equation error:
\frac{1}{s^2 + 2fs + f^2}

Starting from the plant equation and filtering both sides:

y = \frac{b}{s^2 + \beta s + a}\,u \;\Rightarrow\; [s^2 + \beta s + a]\,y = b\,u \;\Rightarrow\; \frac{[s^2 + \beta s + a]}{[s^2 + 2fs + f^2]}\,y = \frac{b}{[s^2 + 2fs + f^2]}\,u

Rearranging so that the unknown parameters appear linearly:

z = (-\beta)\left\{\frac{s}{s^2 + 2fs + f^2}\,y\right\} + (-a)\left\{\frac{1}{s^2 + 2fs + f^2}\,y\right\} + (b)\left\{\frac{1}{s^2 + 2fs + f^2}\,u\right\}

where

z \;\overset{\mathrm{def}}{=}\; \frac{s^2}{s^2 + 2fs + f^2}\,y = y - 2f\left\{\frac{s}{s^2 + 2fs + f^2}\,y\right\} - f^2\left\{\frac{1}{s^2 + 2fs + f^2}\,y\right\}
The terms in braces are signals that can be computed by filtering the I/O signals with known filters (e.g., use MATLAB's lsim). In the ideal case where the data come from a second-order system, the filter parameter f can be arbitrary. In practice, the data come from high-dimensional systems and are corrupted by noise, so that an exact fit is not possible. The choice of f will then affect the weighting of the data. In particular, f determines the identification bandwidth, i.e., the frequency range where the error is minimized (loose interpretation). For control applications, f is typically chosen around the intended closed-loop bandwidth.
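As a concrete illustration (not part of the original handout), a minimal MATLAB sketch of this filtering step; the filter bandwidth f, the time vector t, and the signals du, dy are assumptions carried over from the sketch above:

    % Auxiliary filter denominator: (s + f)^2 = s^2 + 2 f s + f^2
    f   = 2;                  % assumed filter bandwidth [rad/s]
    den = [1, 2*f, f^2];

    % Filtered signals (the terms in braces above), via lsim:
    sy = lsim(tf([1 0], den), dy, t);   % {s/(s^2+2fs+f^2)} dy
    fy = lsim(tf(1,     den), dy, t);   % {1/(s^2+2fs+f^2)} dy
    fu = lsim(tf(1,     den), du, t);   % {1/(s^2+2fs+f^2)} du

    % Equation-error signal and regressor matrix: z = W*theta,
    % with theta = [-beta; -a; b]
    z = dy - 2*f*sy - f^2*fy;
    W = [sy, fy, fu];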
3. Parameter estimation

The previous equation is now in the familiar linear model form for which standard estimation algorithms can be used. That is,

z = W\theta

where \theta is a vector containing the adjustable parameters and W contains the filtered signals. Solving for the least-squares estimate of \theta is now an almost-straightforward procedure:

\theta_{LS} = (W^T W)^{-1} W^T z
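Continuing the same illustrative sketch, the least-squares solve and the reconstruction of the identified transfer function; the sign convention \theta = [-\beta; -a; b] comes from the equation-error form above:

    % Least-squares estimate (backslash avoids forming the inverse explicitly)
    theta = W \ z;

    % Recover the parameters from theta = [-beta; -a; b]
    beta_hat = -theta(1);
    a_hat    = -theta(2);
    b_hat    =  theta(3);

    % Identified linearized transfer function b/(s^2 + beta s + a)
    P_hat = tf(b_hat, [1, beta_hat, a_hat]);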
There are two issues regarding the solution that often make a difference in practical applications. One is the estimation of Initial Conditions (IC) and the other is the regularization of the estimates.

1. IC Estimation is important for experiments beginning on a transient. It can be shown that for an observable linear system, the effect of the IC can be included in the above parametrization by adding a term exp(Ft)x_0, where F is the auxiliary filter state matrix and x_0 contains the unknown initial conditions, estimated together with the rest of the parameters. IC estimation is not important if the data collection begins at steady-state.

2. Regularization is a class of procedures to improve the numerical reliability of the estimated parameters. The general idea is to introduce soft or hard constraints, penalizing unreasonable parameter estimates. (Notice that in directions where the signal-to-noise ratio is poor, the least-squares solution can be far from the actual parameter.) An example of a soft constraint is to solve the minimization problem

\min_\theta \; \|z - W\theta\|^2 + r\,\theta^T H^T H\,\theta

where r is a small parameter and H is a weighting matrix signifying the penalty for each of the parameters deviating from zero. Its solution is also simple (a numerical sketch follows this list):

\theta_{LS} = (W^T W + r H^T H)^{-1} W^T z

When r approaches zero, one recovers the standard least-squares solution. The weighting matrix H serves to emphasize different aspects of the model, e.g., stability, decoupling. Other examples of regularization are the introduction of dither noise in the measurements and the solution of the minimization problem

\min_\theta \|H\theta\|, \quad \text{subject to:} \quad \|z - W\theta\| \le (1 + r)\,E_{LS}

where H is a weighting matrix, r is a threshold parameter, and E_{LS} is the residual error of the least-squares solution.
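A minimal sketch of the soft-constraint (ridge-type) solution, with illustrative values for r and H that are my assumptions:

    % Regularized least squares: theta = (W'W + r H'H)^(-1) W'z
    r = 1e-3;               % assumed small regularization parameter
    H = eye(size(W, 2));    % assumed weighting: equal penalty on all parameters
    theta_reg = (W'*W + r*(H'*H)) \ (W'*z);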
4. Assessment of identified system quality

In feedback systems terms, the quality of identification is quantified by the estimate of uncertainty in the identified system. There are different ways to describe the uncertainty structure, e.g., additive, multiplicative, feedback, coprime factor. The last form is the most general description of uncertainty, but multiplicative and feedback are the easiest to visualize. For the multiplicative uncertainty, the true system has transfer function (I + \Delta)P, where P is the nominal or identified plant and \Delta is the uncertainty. While the uncertainty as a system cannot be identified in a reliable way, its magnitude can be estimated, for example, using a nonparametric method like the FFT (better methods exist):

|\Delta(j\omega)| \approx \left|\frac{\mathrm{FFT}\{y - P[u]\}}{\mathrm{FFT}\{P[u]\}}\right|

where P[u] is the estimate of y based on the input u and the nominal model P. The difference y - P[u] is the residual error of the data fit. The interpretation of this estimate of the uncertainty bound is that, closing the loop with a controller for which the nominal co-sensitivity T = (I + PC)^{-1} PC satisfies

|T(j\omega)|\,|\Delta(j\omega)| < 1, \quad \forall\,\omega,

closed-loop stability is guaranteed for the perturbed system. Typically, any data fit is poor at high frequencies, implying that \Delta is large there, effectively imposing a maximum bandwidth constraint on the co-sensitivity.
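A rough MATLAB sketch of this nonparametric estimate (P_hat, du, dy, t as in the illustrative sketches above; windowing and averaging refinements are omitted):

    % Multiplicative uncertainty magnitude from the residual y - P[u]
    yhat = lsim(P_hat, du, t);                 % nominal prediction P[u]
    N    = length(t);
    Ts   = t(2) - t(1);                        % assumes uniform sampling
    w    = (0:N-1)' * (2*pi/(N*Ts));           % frequency grid [rad/s]
    Dmag = abs(fft(dy - yhat) ./ fft(yhat));   % |Delta(jw)| estimate
    k    = 2:floor(N/2);                       % skip DC, keep positive freqs
    loglog(w(k), Dmag(k)), grid on
    xlabel('\omega [rad/s]'), ylabel('|\Delta(j\omega)| estimate')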
An attractive feature of this bound is that it is expressed in terms independent of the controller used to close the loop. Among the difficulties in using this approach, the most important are problems with unstable nominal systems (e.g., when P[u] grows unbounded) and problems with uncertain low-frequency behavior. These problems can be circumvented with more general uncertainty descriptions, at the cost of making the theory and interpretations considerably more complicated.

Assignment:

1. Create an inverted pendulum simulation where the pendulum is first stabilized at the desired operating point (+5 degrees from vertical) and then a low-amplitude excitation signal is introduced at the plant input. Collect the data, extract the appropriate portion, and construct the variation signals \delta u, \delta y.

2. Select an appropriate bandwidth f for the auxiliary filters and construct the regressor vector.

3. Estimate the parameters of the linearized system transfer function and create the corresponding transfer functions. Repeat 2 and 3 for different values of f to see its effect on the identified system (change f by an order of magnitude).

4. Generate an estimate of the maximum closed-loop bandwidth for each model, for which closed-loop stability can be reasonably expected; a rough sketch of one way to do this follows the list. (Depending on the excitation sequence and IC, this step may fail due to the limitations of the multiplicative uncertainty structure.)
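For item 4, one crude reading of the bound above (my interpretation, not a prescription from the handout): inside the closed-loop band |T(j\omega)| \approx 1, so the first frequency where the estimated |\Delta| reaches 1 gives a rough upper limit on the co-sensitivity bandwidth. Continuing the sketch above:

    % Rough maximum closed-loop bandwidth from the uncertainty estimate
    idx = find(Dmag(k) >= 1, 1);
    if isempty(idx)
        disp('Uncertainty estimate stays below 1 on this frequency grid')
    else
        fprintf('Rough maximum closed-loop bandwidth: %.3g rad/s\n', w(k(idx)))
    end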