
Chapter 1

Fundamental Issues in Multivariable Systems

 Topics to be covered:

 Multivariable Connections

 Multivariable Poles and zeros

 Smith form of a Polynomial Matrix and Matrix Fraction Description (MFD)

 Performance Specification

o Time Domain Performance

o Frequency Domain Performance

 Trade-offs in Frequency Domain

 Multivariable Connections
 Figure 1-1 shows cascade (series) interconnection of transfer matrices.

 The transfer matrix of the overall system is:

G(s) = G2(s)G1(s) (for the ordering u → G1 → G2 → y) (1-1)

o Note that the transfer matrices must have suitable dimensions.

Figure 1-1 Cascade interconnection of transfer matrices

 Parallel interconnection of transfer matrices is shown in Figure 1-2.

 The transfer matrix of the overall system is:

G(s) = G1(s) + G2(s) (1-2)

o Note that the transfer matrices must have suitable dimensions.

Figure 1-2 Parallel interconnection of transfer matrices


Cont…

 Feedback interconnection of transfer matrices is shown in Figure 1-3.

 The transfer matrix of the overall system is:

G(s) = (I + G1(s)G2(s))−1G1(s) (with G1 in the forward path and negative feedback through G2) (1-3)

o Note that the transfer matrices must have suitable dimensions.

Figure 1-3 Feedback connection of transfer matrices

 A useful relation in multivariable systems is the push-through rule, defined by:

G1(I + G2G1)−1 = (I + G1G2)−1G1 (1-4)

o The cascade and feedback rules can be combined to evaluate the closed-loop transfer matrix from a block diagram.
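The push-through rule can be checked numerically. The sketch below (assuming numpy is available) uses fixed matrices standing in for transfer matrices evaluated at one frequency; note that the identity matrices on the two sides have different sizes:

```python
import numpy as np

# Numeric check of the push-through rule (1-4):
#   G1 (I + G2 G1)^{-1} = (I + G1 G2)^{-1} G1
# with arbitrary (compatible) matrices standing in for G1 and G2.
G1 = np.array([[1.0, 2.0, 0.0],
               [0.0, 1.0, 1.0]])        # 2x3
G2 = np.array([[1.0, 0.0],
               [2.0, 1.0],
               [0.0, 3.0]])             # 3x2

lhs = G1 @ np.linalg.inv(np.eye(3) + G2 @ G1)   # I is 3x3 here
rhs = np.linalg.inv(np.eye(2) + G1 @ G2) @ G1   # I is 2x2 here
print(np.allclose(lhs, rhs))   # True
```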
Cont…

 MIMO rule:

 To derive the output of a system, start from the output and write down
the blocks as you meet them when moving backward (against the
signal flow) towards the input.

 If you exit from a feedback loop then include a term (I-L)−1 or


(I+L)−1 according to the feedback sign.

 where L is the transfer function around that loop (evaluated


against the signal flow starting at the point of exit from the
loop).

 Parallel branches should be treated independently and their


contributions added together.

Cont…

Example 1-1

 Derive the transfer function of the system shown in figure 1-4.

Figure 1-4 System used in Example 1-1

 Since there are two parallel paths from input to output, by the MIMO rule the
transfer function is:

(1-5)

 Multivariable Poles
 Poles of a system can be derived from the state space realizations and
the transfer functions.

1. Poles Derived from State Space Realizations


 For simplicity we here define the poles of a system in terms of the
eigenvalues of the state space A matrix.

• Definition 1-1
 The poles pi of a system with state-space description (A, B, C, D) are the
eigenvalues λi(A), i = 1, 2, ..., n, of the matrix A.

 The pole polynomial or characteristic polynomial φ(s) is defined as φ(s) =


det(sI − A) .

 Thus the system’s poles are the roots of the characteristic polynomial

φ(s) = det(sI − A) = 0 (1-6)
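Definition 1-1 translates directly into a few lines of code. The sketch below (a hypothetical A matrix, not one from the slides; assumes numpy) computes the poles as the eigenvalues of A:

```python
import numpy as np

# Poles of (A, B, C, D) are the eigenvalues of A (Definition 1-1).
# Companion-form A for the characteristic polynomial
#   phi(s) = det(sI - A) = s^2 + 3s + 2 = (s + 1)(s + 2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

poles = np.linalg.eigvals(A)
print(np.sort(poles.real).round(6))   # [-2. -1.]
```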

Cont…

2. Poles Derived from Transfer Functions


 The poles of G(s) may be somewhat loosely defined as the finite
values s=p where G(p) has a singularity (is infinite).

 The following theorem from MacFarlane and Karcanias allows one


to obtain the poles directly from the transfer function matrix G(s).

Theorem 1-1

 The pole polynomial φ(s) corresponding to a minimal realization of


a system with transfer function G(s) is the least common
denominator of all non-identically-zero minors of all orders of G(s).

 A minor of a matrix is the determinant of the square matrix


obtained by deleting certain rows and/or columns of the matrix.

Cont…

Example 1-2

 Consider the square transfer function matrix

 The minors of order 1 are the four elements which all have (s+1)(s+2)
in the denominator.

 The minor of order 2 is the determinant

 Note the pole-zero cancellation when evaluating the determinant.

 The least common denominator of all the minors is then

 so a minimal realization of the system has two poles one at s = −1


and one at s = −2
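The matrix of Example 1-2 did not survive conversion. The sketch below (assuming sympy) uses a plausible stand-in with exactly the properties described — all four elements share the denominator (s+1)(s+2), and a pole-zero cancellation occurs in the determinant — and applies Theorem 1-1:

```python
import sympy as sp
from functools import reduce

s = sp.symbols('s')
# Hypothetical stand-in for the (lost) matrix of Example 1-2.
d = (s + 1) * (s + 2)
G = sp.Matrix([[s - 1, s], [-6, s - 2]]) / d

# Minors of order 1 are the elements; the only minor of order 2 is det(G),
# where the pole-zero cancellation occurs.
minors = list(G) + [sp.cancel(G.det())]

# Pole polynomial = least common denominator of all minors (Theorem 1-1).
dens = [sp.fraction(sp.cancel(m))[1] for m in minors]
phi = sp.factor(reduce(sp.lcm, dens))
print(phi)   # (s + 1)*(s + 2)
```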
Cont…

Example 1-3
 Consider the following system with 3 inputs and 2 outputs.

 The minors of order 1 are the elements of G(s), so they are

 The minors of order 2, corresponding to the deletion of different columns, are

 By considering all minors we find their least common denominator

 The system therefore has four poles: one at s = −1, one at s = 1 and two at s = −2 .

 From the above examples we see that the MIMO poles are essentially the poles of the
elements.

 However by looking at only the elements it is not possible to determine the


multiplicity of the poles.

 Multivariable Zeros
 Zeros of a system arise when competing effects internal to the
system are such that the output is zero even when the inputs and the
states are not themselves identically zero.

1. Zeros Derived from State Space Realizations


 Zeros are usually computed from a state space description of the
system.

 First note that the state space equations of a system may be


written as

P(s) = [sI − A  −B ; C  D] (1-7)

 The zeros are then the values of s = z for which the polynomial system
matrix P(s) loses rank resulting in zero output for some nonzero input.

Cont…

 Numerically the zeros are found as non trivial solutions to the following
problem

(M − zIg) [xz ; uz] = 0 , M = [A  B ; C  D] , Ig = [I  0 ; 0  0] (1-8)
 This is solved as a generalized eigenvalue problem.

 (In the conventional eigenvalue problem we have Ig = I ).
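The slides refer to a Matlab m-file for this computation; an equivalent sketch in Python (assuming scipy, and a hypothetical SISO realization rather than the Example 1-4 system, whose matrices were lost) solves the generalized eigenvalue problem of eqn 1-8 directly:

```python
import numpy as np
from scipy.linalg import eig

# Transmission zeros via the generalized eigenvalue problem (eqn 1-8):
#   M v = z Ig v,  M = [[A, B], [C, D]],  Ig = [[I, 0], [0, 0]].
# Hypothetical SISO realization of G(s) = (s - 1)/((s + 1)(s + 2)),
# which has a single zero at s = 1.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[-1.0, 1.0]])
D = np.array([[0.0]])

M = np.block([[A, B], [C, D]])
Ig = np.block([[np.eye(2), np.zeros((2, 1))],
               [np.zeros((1, 2)), np.zeros((1, 1))]])

vals = eig(M, Ig, right=False)
# Keep only the finite generalized eigenvalues (the singular Ig
# produces infinite ones as well).
zeros = vals[np.isfinite(vals) & (np.abs(vals) < 1e6)]
print(np.round(zeros.real, 6))   # [1.]
```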

 The zeros resulting from a minimal realization are sometimes called the
transmission zeros.

 If one does not have a minimal realization, then numerical computations may
yield additional invariant zeros.

 These invariant zeros plus the transmission zeros are sometimes called the
system zeros.
 The invariant zeros can be further subdivided into input and output decoupling zeros.

 These cancel poles associated with uncontrollable or unobservable states and hence
have limited practical significance.
Cont…

 If the system outputs contain direct information about each of the
states and there is no direct connection from the inputs, then there are no
transmission zeros.

 This would be the case if C = I and D = 0, for example.

 For square systems with m=p inputs and outputs and n states, limits
on the number of transmission zeros are:

nz ≤ n − 2m + rank(CB) (for D = 0) (1-9)
Cont…

• Example 1-4

 Consider the following state space realization:

Where,

 Determine the zeros of the system.

Cont…

Solution:

 First we derive the number of transmission zeros according to eqn 1-9.

 The product of CB is:

 So since D = 0 according to eqn. 1-9 the system has at most n − 2m +


rank(CB)= 3 − 2 = 1 zero.

 To find the value of zero we construct M and Ig as follows;

 Now by use of generalized eigenvalue problem one can find the zeros.

 The following Matlab m.file can be used to derive zeros.

o It shows that the system has a zero at s = −4.

Cont…

2. Zeros Derived from Transfer Functions


 For a SISO system the zeros zi are the solutions to G(zi) = 0.

 In general it can be argued that zeros are values of s at which G(s) loses
rank.

o This is the basis for the following definition of zeros for a multivariable
system (MacFarlane and Karcanias).

Definition 1-2:

 zi is a zero of G(s) if the rank of G(zi) is less than the normal rank of G(s) .

 The zero polynomial is defined as z(s) = (s − z1)(s − z2) ··· (s − znz),

 where nz is the number of finite zeros of G(s) .

– We do not consider zeros at infinity.

– We require that zi is finite.

– Recall that the normal rank of G(s) is the rank of G(s) at all values of s
except at a finite number of singularities (which are the zeros).
Cont…

 Note that this definition of zeros is based on the transfer function


matrix corresponding to a minimal realization of a system.

 These zeros are sometimes called transmission zeros but we will


simply call them zeros.

 We may sometimes use the term multivariable zeros to distinguish


them from the zeros of the elements of the transfer function
matrix.

 Finite Zeros from a Minimal State-Space Model:

 Given a minimal state-space realization (A, B, C, D) of G(s), the finite
zeros of G(s) are the same as the finite zeros of the system matrix P(s).

– Thus, the locations of the finite zeros of G(s) are the values of s for
which the system matrix of a minimal realization drops rank.
Cont…

 The following theorem is useful for hand calculating the zeros of a


transfer function matrix G(s) .

Theorem 1-2:

 The zero polynomial z(s) corresponding to a minimal realization of


the system is the greatest common divisor of all the numerators of
all order-r minors of G(s).

 where r is the normal rank of G(s) provided that these minors


have been adjusted in such a way as to have the pole
polynomial φ(s) as their denominators.

Cont…

Example 1-5

 Consider the transfer function matrix:

 The normal rank of G(s) is 2 and the minor of order 2 is the


determinant of G(s),

 From Theorem 1-1 the pole polynomial is φ(s) = s + 2 and


therefore the zero polynomial is z(s) = s − 4 .

 Thus G(s) has a single RHP-zero at s = 4 .

 This illustrates that in general multivariable zeros have no relationship


with the zeros of the transfer function elements.

o This is also shown by the following example where the system has
no zeros.
Cont…

Example 1-6

 Consider the following system:

 According to Example 1-2, the pole polynomial is:

 The normal rank of G(s) is 2 and the minor of order 2 is the


determinant of G(s),

 where det(G(s)), adjusted to have φ(s) as its denominator, is:

 Thus the zero polynomial is given by the numerator which is 1, and

 we find that the system has no multivariable zeros.

Cont…

Example 1-7

 Consider the system:

 The normal rank of G(s) is 1 and since there is no value of s for


which both elements become zero, G(s) has no zeros.

 In general non-square systems are less likely to have zeros than


square systems.

 The following is an example of a non-square system which has a zero.

Cont…

Example 1-8
 Consider the following system:

 According to Example 1-3, the pole polynomial is:

• The minors of order 2 with φ(s) as their denominators are:

 The greatest common divisor of all the numerators of all order-2 minors is
z(s) = s −1.

 Thus, the system has a single RHP-zero located at s = 1.

 We also see from the last example that a minimal realization of a MIMO system
can have poles and zeros at the same value of s provided their directions are
different.
– This is discussed in the next section.
 Directions of Poles and Zeros
a) Zero directions:
 Let G(s) have a zero at s = z,

 Then G(s) loses rank at s = z and there will exist nonzero vectors
uz and yz such that

G(z) uz = 0 , yzH G(z) = 0 (1-10)

 here uz is defined as the input zero direction and yz is defined as


the output zero direction.

 We usually normalize the direction vectors to have unit length,
i.e. ||uz||2 = 1 and ||yz||2 = 1.

 From a practical point of view, the output zero direction yz is usually

more important than uz,

 because yz gives information about which output (or combination of

outputs) may be difficult to control.
Cont…

 In principle we may obtain uz and yz from a singular value


decomposition (SVD) of G(z) = YΣUH and

 we have that uz is the last column in U, corresponding to the zero


singular value of G(z) and yz is the last column of Y.

 A better approach numerically is to obtain uz from a state space


description using the generalized eigenvalue problem in eqn.1-8.

Cont…

b) Pole directions:
 Let G(s) have a pole at s = p.

 Then G(p) is infinite, and we may somewhat crudely write

G(p) up = ∞ · yp (1-11)

 where up is the input pole direction and yp the output pole


direction.

 As for uz and yz the vectors up and yp may be obtained from an SVD


of G(p) = YΣUH.

 Then up is the first column in U corresponding to the infinite


singular value and yp the first column in Y.

 If the inverse of G(p) exists then it follows from the SVD that

G(p)−1 yp = 0 · up (1-12)
Cont…

 However if we have a state space realization of G(s) then it is better to

determine the pole directions from the right and left eigenvectors of A.

 Specifically, if p is a pole of G(s), then p is an eigenvalue of A.

 Let tp and qp be the corresponding right and left eigenvectors

 i.e. A tp = p tp , qpH A = p qpH (1-13)

 then the pole directions are

up = BHqp / ||BHqp||2 , yp = C tp / ||C tp||2 (1-14)
Cont…

Example 1-9
 Consider the following plant:

 It has an RHP-zero at z = 4 and an LHP-pole at p = −2 .

 We will use an SVD of G(z) and G(p) to determine the zero and pole directions.

 But we stress that this is not a reliable method numerically.

 To find the zero direction consider

 The zero input and output directions are associated with the zero singular value
of G(z).

o we get

• We see from yz that the zero has a slightly larger component in the first output.

Cont…

 Next, to determine the pole directions consider

 The SVD as ε →0 yields

• The pole input and output directions are associated with the largest
singular value, 9.01/ε, and

 we get

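The plant matrix of Examples 1-5 and 1-9 was lost in conversion. The sketch below (assuming numpy) uses a hypothetical reconstruction consistent with everything stated in the slides — an RHP-zero at 4, a pole at −2, and a largest singular value behaving like 9.01/ε near the pole — and extracts the zero and pole directions by SVD:

```python
import numpy as np

def G(s):
    # Hypothetical reconstruction of the Example 1-9 plant (the original
    # matrix did not survive conversion): zero at s = 4, pole at s = -2.
    return np.array([[s - 1.0, 4.0],
                     [4.5, 2.0 * (s - 1.0)]]) / (s + 2.0)

# Zero directions: u_z, y_z belong to the zero singular value of G(z).
Y, S, Uh = np.linalg.svd(G(4.0))
u_z, y_z = Uh[-1, :], Y[:, -1]
print("smallest singular value at z=4:", S[-1])   # ~ 0, G(4) loses rank

# Pole directions: u_p, y_p belong to the largest singular value of G(p),
# evaluated at p + eps since G(p) itself is infinite.
eps = 1e-6
Yp, Sp, Uph = np.linalg.svd(G(-2.0 + eps))
u_p, y_p = Uph[0, :], Yp[:, 0]
print("eps * largest singular value near p=-2:", Sp[0] * eps)   # ~ 9.01
```

As the slides warn, the SVD route is illustrative rather than numerically reliable; the state-space eigenvector formulas (1-13)-(1-14) are preferred in practice.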
 Smith Form of a Polynomial Matrix
 Suppose that Π(s) is a polynomial matrix.

 Smith form of Π(s) is denoted by Πs(s), and it is pseudo-diagonal, of the
following form:

Πs(s) = [Πds(s)  0 ; 0  0] (1-15)

 and Πds(s) is a square diagonal matrix of the following form

Πds(s) = diag{ε1(s), ε2(s), ..., εr(s)} (1-16)

 Furthermore, εi(s) is a factor of εi+1(s).

 where εi(s) = χi(s)/χi−1(s), with χ0(s) = 1 and χi(s) the monic gcd of all
the i × i minors of Π(s). (1-17)

 gcd stands for greatest common divisor, and a monic polynomial is one whose
leading coefficient is one.
Cont…

 The three elementary operations for a polynomial matrix are used to find Smith
form.

i. Multiplying a row or column by a constant;

ii. Interchanging two rows or two columns; and

iii. Adding a polynomial multiple of a row or column to another row or


column.

 These operations are carried out on a transfer matrix Π(s) by either pre-
multiplication or post-multiplication by unimodular polynomial matrices known
as elementary matrices.

 A polynomial matrix is unimodular if its inverse also is a polynomial matrix.

 Pre-multiplication of Π(s) by an elementary matrix produces the corresponding


row operation, while post-multiplication produces a column operation.

 Πs(s) is the Smith form of Π(s), and the two are said to be equivalent if there
exists a set of elementary matrices Li and Ri such that

Πs(s) = Ll ··· L2 L1 Π(s) R1 R2 ··· Rr (1-18)
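Rather than performing elementary operations by hand, the diagonal of the Smith form can be computed directly from eqn 1-17. The sketch below (assuming sympy; the worked matrices of Examples 1-10 and 1-11 were lost, so a hypothetical Π(s) is used) forms the χi as monic gcds of all i×i minors:

```python
import sympy as sp
from itertools import combinations
from functools import reduce

s = sp.symbols('s')

def smith_diagonal(P, s):
    """Diagonal entries eps_i = chi_i/chi_{i-1} of the Smith form (eqn 1-17),
    where chi_i is the monic gcd of all i-by-i minors of P and chi_0 = 1."""
    m, p = P.shape
    r = P.rank()
    chi = [sp.Integer(1)]
    for i in range(1, r + 1):
        minors = [P.extract(list(rows), list(cols)).det()
                  for rows in combinations(range(m), i)
                  for cols in combinations(range(p), i)]
        g = reduce(sp.gcd, [mn for mn in minors if mn != 0])
        chi.append(sp.monic(g, s))
    return [sp.cancel(chi[i] / chi[i - 1]) for i in range(1, r + 1)]

# Hypothetical polynomial matrix (not one of the slides' examples):
Pi = sp.Matrix([[s + 1, s * (s + 1)],
                [0, (s + 1) * (s + 2)]])
diag_entries = smith_diagonal(Pi, s)
print([sp.factor(e) for e in diag_entries])   # [s + 1, (s + 1)*(s + 2)]
```

Note that the divisibility property holds as required: ε1 = s+1 divides ε2 = (s+1)(s+2).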
Cont…

Example 1-10

 Consider the following polynomial matrix:

 so we have

– and now the χi(s) are:

 the Smith form of Π(s) is:

Cont…

Example 1-11

 Consider the following polynomial matrix:

 so we have

– and now the χi(s) are:

 the Smith form of Π(s) is:

 Smith-McMillan Forms
 The Smith-McMillan form is used to determine the poles and zeros of
the transfer matrices of systems with multiple inputs and/or outputs.

 The transfer matrix is a matrix of transfer functions between the


various inputs and outputs of the system.

 The poles and zeros that are of interest are the poles and zeros of
the transfer matrix itself, not the poles and zeros of the individual
elements of the matrix.

 The locations of the poles of the transfer matrix are available by


inspection of the individual transfer functions, but the total number of
the poles and their multiplicity is not.

 The location of system zeros, or even their existence, is not available


by looking at the individual elements of the transfer matrix.

Cont…

 The transfer matrix will be denoted by G(s).

 The number of rows in G(s) is equal to the number of system


outputs; that will be denoted by m.

 The number of columns in G(s) is equal to the number of system


inputs; that will be denoted by p.

 Thus, G(s) is an m × p matrix of transfer functions.

 The normal rank of G(s) is r, where r ≤ min{p, m}.

Cont…
 The following theorem gives a diagonal form for a rational transfer-function matrix:

Theorem 1-3 (Smith-McMillan form):


 Let G(s) =[gij(s)] be an m × p matrix transfer function, where gij(s) are
rational scalar transfer functions, G(s) can be represented by:

G(s) = Π(s) / DG(s) (1-19)

o where Π(s) is an m × p polynomial matrix of rank r and

o DG(s) is the least common multiple of the denominators of all elements of G(s) .

 Then, G~(s), the Smith-McMillan form of G(s), can be derived directly by

(1-20)

 Where M(s) is:

M(s) = diag{ε1(s)/δ1(s) , ... , εr(s)/δr(s) , 0 , ... , 0} (1-21)

o where {εi (s), δi (s)} is a pair of monic and coprime polynomials for i = 1,2,..., r .
Cont…

 Furthermore, εi(s) is a factor of εi+1(s) and δi+1(s) is a factor of δi(s).

 Elements of the matrix M(s) can be defined by:

εi(s)/δi(s) = [Πs(s)]ii / DG(s) (with all common factors cancelled) (1-22)

 where [Πs(s)]ii are the diagonal elements of Πs(s) (Smith form of Π(s)), i.e.

Πs(s) = diag{[Πs(s)]11 , ... , [Πs(s)]rr , 0 , ... , 0} (1-23)

 We recall that a matrix G(s), and its Smith-McMillan form G~(s) are equivalent
matrices.

 Thus, there exist two unimodular matrices, L(s) and R(s), such that

Πs(s) = L(s) Π(s) R(s) (1-24)
 L(s) and R(s) are the unimodular matrices that convert Π(s) to its Smith form Πs(s).

 Then there exist two matrices L~(s) and R~(s), such that

G~(s) = L~(s) G(s) R~(s) (1-25)

 where L~(s) and R~(s) are also unimodular and:

L~(s) = L(s) , R~(s) = R(s) (1-26)
Cont…

 The poles and zeros of the transfer matrix G(s) can be found from the
elements of M(s).

 The pole polynomial is defined as

φ(s) = δ1(s) δ2(s) ··· δr(s) (1-27)

 The total number of poles in the system is given by deg(φ (s)),


which is known as the McMillan degree.

o It is the dimension of a minimal state-space representation of


G(s).

 A state-space representation of G(s) may be of higher order than


the McMillan degree, indicating pole-zero cancellations in the
system.

Cont…

 In similar fashion, the zero polynomial is defined as

z(s) = ε1(s) ε2(s) ··· εr(s) (1-28)

 The roots of z(s) = 0 are known as the transmission zeros of G(s).

 It can be seen that any transmission zero of the system must be a


factor in at least one of the εi(s) polynomials.

 The normal rank of both M(s) and G(s) is r.

 It is clear that if any εi(zi) is zero, then the rank of M(zi) drops
below r.

 Therefore, since the ranks of M(s) and G(s) are always equal,
G(s) loses rank there as well.

 We illustrate the Smith-McMillan form by a simple example.

Cont…

Example 1-12
 Consider the following transfer-function matrix:

 We can then express G(s) in the form:

 According to Example 1-10, the Smith form of Π(s) is:

 So the Smith-McMillan form of G(s) is:

 Clearly the pole polynomial and the zero polynomial are:

Cont…

Example 1-13
 Consider the following example of a system with m = 3 outputs and p = 2 inputs.

 The transfer matrix is shown below;

 We can then express G(s) in the form:

 According to Example 1-11, the Smith form of Π(s) is:

Cont…

 So the Smith-McMillan form of G(s) is:

 Clearly the pole polynomial and the zero polynomial are:

 Matrix Fraction Description (MFD)
 A model structure that is related to the Smith-McMillan form is the matrix
fraction description (MFD).

 There are two types, namely

o a right matrix fraction description (RMFD) and

o a left matrix fraction description (LMFD).

 First of all, suppose G~(s) is an m×m matrix that is the Smith-McMillan
form of G(s), and define the following two matrices:

N(s) = diag{ε1(s), ε2(s), ..., εm(s)} (1-29)

D(s) = diag{δ1(s), δ2(s), ..., δm(s)} (1-30)

 where N(s) and D(s) are m×m matrices.

 Hence G~(s) , can be written as

G~(s) = N(s) D(s)−1 (1-31)
Cont…

 Combining 1-25 and 1-31, we can write

G(s) = [L~(s)−1N(s)] [R~(s)D(s)]−1 = GN(s) GD(s)−1 (1-32)

 This is known as a right matrix fraction description (RMFD) where:

GN(s) = L~(s)−1N(s) , GD(s) = R~(s)D(s) (1-33)

 If one starts with G~(s) = D(s)−1N(s), then combining it with 1-25 gives

G(s) = G̅D(s)−1 G̅N(s) (1-34)

 This is known as a left matrix fraction description (LMFD) where:

G̅N(s) = N(s)R~(s)−1 , G̅D(s) = D(s)L~(s) (1-35)
Cont…

 The left and right matrix descriptions have been initially derived
starting from the Smith-McMillan form.

 Hence, the factors are polynomial matrices.

 However, it is immediate to see that they provide a more general


description.

 In particular, GN(s) and GD(s) are in general matrices with
rational entries.

 One possible way to obtain this type of representation is to divide


the two polynomial matrices forming the original MFD by the same
(stable) polynomial.

Cont…

 We also observe that the RMFD (LMFD) is not unique, because, for any
nonsingular m×m matrix Ω(s) we can write G(s) as

G(s) = [GN(s)Ω(s)] [GD(s)Ω(s)]−1 (1-36)

 where Ω(s) is said to be a right common factor.

 When the only right common factors of GN(s) and GD(s) are unimodular
matrices, we say that GN(s) and GD(s) are right coprime.

 In this case, we say that the RMFD (GN(s),GD(s)) is irreducible.

 It is easy to see that when a RMFD is irreducible, then

o s = z is a zero of G(s) if and only if GN(s) loses rank at s = z ; and

o s = p is a pole of G(s) if and only if GD(s) is singular at s = p .


• This means that the pole polynomial of G(s) is φ(s) = det(GD(s)).

 An example showing the above concepts is considered next.

Cont…
Example 1-14

 Consider a 2×2 MIMO system having the transfer function

a) Find the Smith-McMillan form by performing elementary row and


column operations.

b) Find the poles and zeros.

c) Build a RMFD for the model.

Cont…

Solution:

a) We first compute its Smith-McMillan form by performing elementary


row and column operations.

 where

Cont…

b) We see that the observable and controllable part of the system has
zero and pole polynomials given by

 So the poles are −1, −1, −2 and −2, and the zeros are −1.5 ± j3.97.

c) To derive RMFD we need:

 So,

Cont…

 Matrix fraction description (MFD) can be extended to an n×m non-square
matrix G(s).

 In RMFD, GN(s) is n×m and GD(s) is m×m, and

 in LMFD, the numerator matrix is n×m and the denominator matrix is n×n.

 The following example shows the procedure for finding the RMFD and LMFD of
a non-square matrix G(s).

Cont…

Example 1-15

 Consider the following transfer matrix;

 Find RMFD and LMFD of the system.

Cont…

Solution:

 According to Example 1-13, the SMM form of G(s) is:

 To derive the RMFD and LMFD we must find the unimodular matrices
that convert Π(s) to Πs(s).

 So

Cont…

 Now we write G(s) according to G~(s) :

 The above equation leads to the RMFD, and
Cont…

 To derive the LMFD, G~(s) must be partitioned accordingly, so:
 Performance Specification
 There are some important definitions in performance specification.

 Nominal stability NS:


 The system is stable with no model uncertainty.

 Nominal Performance NP:


 The system satisfies the performance specifications with no model
uncertainty.

 Robust stability RS:


 The system is stable for all perturbed plants about the nominal model
up to the worst case model uncertainty.

 Robust performance RP:


 The system satisfies the performance specifications for all perturbed
plants about the nominal model up to the worst case model uncertainty.

 Performance specification can be considered in time and frequency domain.

 Time Domain Performance

 Although closed loop stability is an important issue, the real objective


of control is to improve performance, that is,

 to make the output y(t) behave in a more desirable manner.

 The objective of this section is to discuss ways of evaluating closed-loop
performance.
Cont…

 Step response analysis is an approach often taken by engineers to evaluate
the performance of a control system.

 Consider the characteristics shown in Figure 1-5, obtained by simulating the
response to a step in the reference input.

Figure 1-5: Step response of a system

Cont…

 Rise time, tr ,
o the time it takes for the output to first reach 90% of its final value, which is usually
required to be small.

 Settling time, ts ,
o the time after which the output remains within ± 5% ( or ± 2%) of its final value,
which is usually required to be small.

 Overshoot, P.O,
o the peak value divided by the final value, which should typically be 1.2 (20%)
or less.

 Decay ratio,
o the ratio of the second and first peaks, which should typically be 0.3 or less.

 Steady state offset, ess,


o the difference between the final value and the desired final value, which is usually
required to be small.

• Excess variation,
o the total variation (TV) divided by the overall change at steady state, which should be
as close to 1 as possible.
Cont…

 The total variation is the total movement of the output as illustrated in Fig. 1-6.

Figure 1-6: Total variation in the step response of a system

 For the cases considered here the overall change is 1, so the excess variation is
equal to the total variation.

TV = Σi |vi+1 − vi| (1-37)

 Note that the step response is equal to the integral of the corresponding impulse
response, e.g. set u=1 in the following convolution integral.

y(t) = ∫0t g(τ) u(t − τ) dτ (1-38)

where g(τ) is the impulse response.
Cont…

 One can compute the total variation as the integrated absolute area (1-
norm), of the corresponding impulse response

TV = ∫0∞ |g(τ)| dτ = ||g(t)||1 (1-39)

 ISE, IAE, ITSE, ITAE:

o These measures are integral squared error, integral absolute error,


integral time weighted squared error and integral time weighted
absolute error respectively.

 For example IAE is defined as

IAE = ∫0∞ |e(t)| dt (1-40)

 The rise time and settling time are measures of the speed of the
response,

 whereas the overshoot, decay ratio, TV, ISE, IAE, ITSE, ITAE and
steady state offset are related to the quality of the response.

 Frequency Domain Performance
 The frequency response of the loop transfer function, L(jω), can also
be used to characterize closed-loop performance.

 One advantage of the frequency domain compared to a step


response analysis is that it considers a broader class of signals
(sinusoids of any frequency).

o This makes it easier to characterize feedback properties, and in


particular system behaviors in the crossover (bandwidth) region.

 We will now describe some of the important frequency domain


measures used to assess performance, e.g.
o gain and phase margins,

o the maximum peaks of T and S (sensitivity and complementary sensitivity


functions), and

o the various definitions of crossover and bandwidth frequencies used to


characterize speed of response.
Cont…

 Let L(s) denote the loop transfer function of a system which is closed-loop stable
under negative feedback.

 A typical Bode plot and a typical Nyquist plot of L(jω) illustrating the gain
margin (GM) and phase margin (PM) are given in Figures 1-7 and 1-8,
respectively.

Figure 1-7: Bode plot of L(jω) . Figure 1-8: Nyquist plot of L(jω) .

 From Nyquist’s stability condition, the closeness of the curve L(jω) to the point -1 in the
complex plane is a good measure of how close a stable closed-loop system is to instability.

 We see from Figure 1-7 that GM measures the closeness of L(jω) to -1 along the real axis,
whereas PM is a measure along the unit circle.
Cont…

 More precisely, if the Nyquist plot of L(jω) crosses the negative real axis
between -1 and 0, then the (upper) gain margin is defined as

GM = 1 / |L(jω180)| (1-41)

 where the phase crossover frequency ω180 is where the Nyquist curve of L(jω)
crosses the negative real axis between -1 and 0, i.e.

∠L(jω180) = −180° (1-42)

 The phase margin is defined as

PM = ∠L(jωc) + 180° (1-43)

 where the gain crossover frequency ωc is the frequency where |L(jω)| crosses 1,
i.e.

|L(jωc)| = 1 (1-44)

 The PM is a direct safeguard against time delay uncertainty;

 the system becomes unstable if we add a time delay of

θmax = PM / ωc (with PM in radians) (1-45)

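The definitions 1-41 to 1-44 can be exercised numerically. The sketch below (assuming scipy, and a hypothetical loop transfer function L(s) = 1/(s(s+1)²), not one from the slides) finds the crossover frequencies by root finding; the phase is written analytically to avoid the ±180° wrap-around of a principal-value angle:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical loop transfer function L(s) = 1 / (s (s + 1)^2).
gain = lambda w: 1.0 / (w * (1.0 + w * w))                 # |L(jw)|
phase = lambda w: -90.0 - 2.0 * np.degrees(np.arctan(w))   # angle in deg

w180 = brentq(lambda w: phase(w) + 180.0, 0.1, 10.0)  # phase crossover (1-42)
wc = brentq(lambda w: gain(w) - 1.0, 0.1, 10.0)       # gain crossover (1-44)

GM = 1.0 / gain(w180)    # eqn 1-41
PM = 180.0 + phase(wc)   # eqn 1-43

print(f"w180 = {w180:.3f}, GM = {GM:.2f}")   # w180 = 1.000, GM = 2.00
print(f"wc = {wc:.3f}, PM = {PM:.1f} deg")   # PM ~ 21.4 deg
```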
Cont…

 From the above arguments we see that the GM and PM provide stability margins
for gain and delay uncertainty.

 More generally, to maintain closed-loop stability, the Nyquist stability


condition tells us that the number of encirclements of the critical point -1 by
L(jω) must not change.

 The actual closest distance of L(jω) to -1 is equal to 1/Ms, where Ms is the
peak value of the sensitivity S(jω).

 In summary, specifications on the GM and PM (e.g. GM > 2 and PM > 30°) are
used to provide the appropriate trade-off between performance and stability
robustness.

 The maximum peaks of the sensitivity and complementary sensitivity functions


are defined as

Ms = maxω |S(jω)| , MT = maxω |T(jω)| (1-46)

– Since S + T = I, |S| and |T| differ by at most 1 at each frequency.

– A large value of Ms therefore occurs if and only if MT is large.

Cont…

 We now give some justification for why we may want to reduce the value of Ms.

 Consider the one degree-of-freedom configuration in Figure 1-9.

Figure 1-9

 Let us define the error signal as e = y − r. Then,

 without control and noise (u = n = 0), we have e = y − r = Gd d − r, and

 with feedback control, e = S(Gd d − r).

 Thus, feedback control improves performance in terms of reducing |e| at all

frequencies where |S| < 1.

Cont…

 One may also view Ms as a robustness measure.

 To maintain closed-loop stability, we want L(jω) to stay away from


the critical point -1.

 According to Figure 1-10, the smallest distance between L(jω) and -1 is

Ms−1 (i.e. 1/Ms), and therefore, for robustness, the smaller Ms is, the better.

Figure 1-10: Nyquist plot of L( jω) .

Cont…

 In summary, both for stability and performance we want Ms close to 1.

 There is a close relationship between these maximum peaks and the


GM and PM.

 Specifically, for a given Ms we are guaranteed

GM ≥ Ms/(Ms − 1) , PM ≥ 2 arcsin(1/(2Ms)) ≥ 1/Ms [rad] (1-47)

 For example, with Ms = 2 we are guaranteed GM > 2 and PM > 29°.

 Similarly, for a given value of MT we are guaranteed

GM ≥ 1 + 1/MT , PM ≥ 2 arcsin(1/(2MT)) ≥ 1/MT [rad] (1-48)

 and specifically with MT = 2 we have GM > 1.5 and PM > 29°.
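The guaranteed margins quoted above follow by direct evaluation of the bound formulas; a minimal sketch (assuming numpy):

```python
import numpy as np

# Margin guarantees implied by the peaks Ms and MT (eqns 1-47, 1-48):
#   from Ms:  GM >= Ms/(Ms - 1),  PM >= 2 arcsin(1/(2 Ms))
#   from MT:  GM >= 1 + 1/MT,     PM >= 2 arcsin(1/(2 MT))
def margins_from_Ms(Ms):
    return Ms / (Ms - 1.0), np.degrees(2.0 * np.arcsin(1.0 / (2.0 * Ms)))

def margins_from_MT(MT):
    return 1.0 + 1.0 / MT, np.degrees(2.0 * np.arcsin(1.0 / (2.0 * MT)))

gm_s, pm_s = margins_from_Ms(2.0)   # GM bound 2.0, PM bound ~ 28.96 deg
gm_t, pm_t = margins_from_MT(2.0)   # GM bound 1.5, PM bound ~ 28.96 deg
print(gm_s, pm_s, gm_t, pm_t)
```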

 Trade-offs in Frequency Domain
 Consider the simple one degree-of-freedom configuration in Figure 1-9.

 The input to the controller K(s) is r − ym and the measured output


is ym = y + n where n is the measurement noise.

 Thus, the input to the plant is

u = K(s)(r − ym) = K(s)(r − y − n) (1-49)

 The objective of control is to manipulate u (design K) such that the


control error e remains small in spite of disturbances d and noises
n.

 The control error e is defined as

e = y − r (1-50)

 where r denotes the reference value (set point) for the output.

Cont…

 The plant model is written as

y = G(s) u + Gd(s) d (1-51)

 and for a one degree-of-freedom controller the substitution of 1-49 and 1-50
into 1-51 yields

y = GK(r − y − n) + Gd d (1-52)

• Or

(I + GK) y = GK (r − n) + Gd d (1-53)

 and hence the closed-loop response is

y = T r + S Gd d − T n , where S = (I + GK)−1 and T = GK(I + GK)−1 (1-54)

 The control error is

e = y − r = −S r + S Gd d − T n (1-55)

 where we have used the fact T + S = I .

 The corresponding plant input signal is

u = K S (r − Gd d − n) (1-56)
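The closed-loop relations can be verified numerically at a single frequency. The sketch below (assuming numpy, with fixed matrices standing in for G, K and Gd) solves the loop equations directly and checks them against the S/T expressions, where S = (I + GK)⁻¹ and T = GK(I + GK)⁻¹:

```python
import numpy as np

# Fixed 2x2 matrices standing in for G(jw), K(jw), Gd(jw).
G = np.array([[1.0, 0.2], [0.0, 1.0]])
K = np.array([[2.0, 0.0], [0.5, 1.0]])
Gd = np.array([[1.0, 0.0], [0.5, 1.0]])
r = np.array([1.0, 0.5])
d = np.array([0.2, -0.1])
n = np.array([0.01, 0.02])

S = np.linalg.inv(np.eye(2) + G @ K)
T = G @ K @ S

# Direct simulation of the loop: u = K(r - y - n), y = G u + Gd d,
# i.e. solve (I + GK) y = GK (r - n) + Gd d for y.
y = np.linalg.solve(np.eye(2) + G @ K, G @ K @ (r - n) + Gd @ d)

assert np.allclose(y, T @ r + S @ Gd @ d - T @ n)        # eqn 1-54
assert np.allclose(y - r, -S @ r + S @ Gd @ d - T @ n)   # eqn 1-55
print("closed-loop identities verified")
```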
Cont…

 The most important design objectives which necessitate trade-offs in feedback


control are summarized below.
1. Performance, good disturbance rejection: needs large controller gains, i.e. L large or T
≈I.

2. Performance, good command following: L large or T ≈ I .

3. Stabilization of unstable plant: L large or T ≈ I .

4. Mitigation of measurement noise on plant outputs: L small or T ≈ 0 .

5. Small magnitude of input signals: K small and L small or T ≈ 0 .

6. Physical controller must be strictly proper: L must approach 0 at high frequencies, or T ≈ 0.

7. Nominal stability (stable plant): L small


End

