
EEE Department /Control systems II/EE 469

University of Benghazi
Faculty of Engineering
Electrical and Electronic Engineering Department

----------------------------------------------------------------------------------------------------------------------------
Control systems II EE469 Fall 2017/2018
Course Format and Assessment
The course will be delivered in 42 hours, with 9 computer-based assignments (Matlab & Simulink).
The final exam will be in January 2018. Course material will include handouts.
----------------------------------------------------------------------------------------------------------------------------
Control systems II (EE 469)
Pre-requisite: EE 312
Course description & Learning Outcomes:
1. Systems and modelling
2. Controllability and observability
3. Full state feedback control design and Pole placement
4. Observer
5. Optimal control technique and linear quadratic regulator
6. Lead-lag compensators by root locus
7. PID controller

----------------------------------------------------------------------------------------------------------------------------
Instructor:
Dr. Awad Shamekh
EEE Dep.
Office room no. 89
Email: [email protected]
---------------------------------------------------------------------------------------------------------------------------
References:
- Modern control systems, R.C. Dorf
- Control systems Engineering, Norman Nise
- Linear control systems engineering, Morris Driels
- Matlab & Simulink help documents
----------------------------------------------------------------------------------------------------------------------------


1 Systems and modelling

In control theory, systems can be classified as linear or non-linear and, separately, as time-invariant or time-varying. In a time-invariant system the parameters that describe the system do not change over time, whereas in a time-varying system these parameters vary as the system operating conditions change.

In technical applications, modelling is a useful way to consolidate information about systems and to uncover their characteristics. This can lead to important information about the plant being revealed. Determining suitable parameters for the model can play an important role in specifying critical operating conditions and, through simulation, can prevent costly failures in future plant operation.

There are three different ways in which a model of a process can be constructed:
1) through theoretical analysis of the process;
2) by experimentation and data analysis;
3) by a combination of experimentation and theoretical analysis.

A process can be described by many types of model, ranging from sophisticated and detailed models to more basic ones, depending on the complexity of the process and the requirements of the developed controller. Typically, models have either an internal structure, such as a ''state-space'' model, or describe an external relationship, as with an ''input-output (I/O)'' model.

Figure 1.1 illustrates the response of a continuous-time system to a step input, a response
that can be simulated by a first order system (an integrator with a feedback gain indicated
in the figure).


1.1. Continuous-time model

The corresponding model is described by the differential equation


dy/dt = (−1/T) y(t) + (G/T) u(t)        (1.1)

Or by the transfer function

H(s) = G / (1 + sT)        (1.2)
where T is the time constant of the system and G is the gain.
In equation (1.1) u represents the input (or the control) of the system and y the output.
This equation may be simulated by continuous means as illustrated in Figure 1.1. The
step response illustrated in Figure 1.1 reveals the speed of the output variations,
characterized by the time constant T, and the final value, characterized by the static gain
G.
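The behaviour described above can be checked numerically. Below is a minimal Python sketch (the course uses Matlab & Simulink; this is an equivalent forward-Euler simulation, and the values of G and T are illustrative assumptions):

```python
# Forward-Euler simulation of dy/dt = -(1/T)*y(t) + (G/T)*u(t) for a unit step.
G, T = 2.0, 0.5          # illustrative static gain and time constant (assumptions)
dt, t_end = 0.001, 5.0   # 5 s is 10 time constants, so the output should settle
y, t = 0.0, 0.0
while t < t_end:
    y += dt * (-(1.0 / T) * y + (G / T) * 1.0)  # u(t) = 1 (unit step)
    t += dt
print(round(y, 3))  # the final value approaches the static gain G
```

As expected, the output rises with speed set by T and settles at the static gain G.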

2 Controllability and Observability

Two issues of importance in state-space control are referred to as controllability and


observability. These two issues are now treated in turn.
Controllability: A system is said to be completely state controllable if it is possible for the
control system to move the state vector from any initial value to any other value in a
finite time. This implies that it must be possible to move all the open-loop poles, by state
feedback, to any closed loop location.


Testing for Controllability


Given that the state equation of the system under a particular loading condition can be
written as:

Ẋ(t) = AX(t) + BU(t)        (2.1)
Y(t) = CX(t)        (2.2)

The standard test to determine whether a system is controllable or not is to find the rank of the following partitioned matrix:

[ B  AB  A^2 B  ...  A^(n-1) B ]        (2.3)

If the rank of this matrix is equal to the number of rows in B, which is equal to the
number of states n, then the system is completely state controllable. We can define the
rank of a matrix as the number of linearly independent columns. For example, if we
define the following matrices:

A = [1 5 1; 2 6 1; 3 1 2]   B = [1 5 6; 2 6 8; 3 1 4]   C = [1 2; 3 1]

The rank of A is 3 and the rank of C is 2. However, the rank of B is 2. The reason that B is only of rank 2 is that the entries in column 3 are equal to the entries of column 1 + column 2. For a given square matrix (i.e. a matrix that has the same number of rows and columns) to be of full rank, i.e. of rank n, its determinant must be nonzero.
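These ranks are easy to verify numerically; a short Python/NumPy sketch with the three matrices above:

```python
import numpy as np

A = np.array([[1, 5, 1], [2, 6, 1], [3, 1, 2]])
B = np.array([[1, 5, 6], [2, 6, 8], [3, 1, 4]])  # column 3 = column 1 + column 2
C = np.array([[1, 2], [3, 1]])

print(np.linalg.matrix_rank(A))  # 3 (full rank)
print(np.linalg.matrix_rank(B))  # 2 (linearly dependent columns)
print(np.linalg.matrix_rank(C))  # 2
print(round(np.linalg.det(A)))   # -10, nonzero, confirming A is full rank
```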

Let us now go through an example to determine whether a system is controllable. The state matrices for an antenna positioning system are provided below:


A = [0 1 0; 0 −1 1; 0 0 −5]   b = [0; 0; 5]

Let us construct the following matrix:

Pc = [ b  Ab  A^2 b ]

b = [0; 0; 5]

Ab = [0 1 0; 0 −1 1; 0 0 −5] × [0; 0; 5] = [0; 5; −25]

A^2 b = A × Ab = [0 1 0; 0 −1 1; 0 0 −5] × [0; 5; −25] = [5; −30; 125]

Pc = [ b  Ab  A^2 b ] = [0 0 5; 0 5 −30; 5 −25 125]

The determinant of this matrix is equal to −125. Since this value is nonzero, the system is controllable. If we wanted to check this in Matlab, we could type the following commands:

>> det([b A*b A*A*b])
ans = -125        (nonzero)
>> rank([b A*b A*(A*b)])
ans = 3

Observability
A system is said to be completely observable if it is possible to reconstruct the state
vector completely from measurements made at the system’s output. For example, if we
consider the position of a car, then it will be completely observable if by measuring speed
(through the speedometer), distance (through the odometer) and steering wheel position,
it is possible to determine where the car was parked before being driven. The check for


observability is very similar to that for controllability. If the following derived matrix is
full rank, i.e. its determinant is none-zero then the system is observable:

Po = [ C; CA; CA^2; ... ; CA^(n-1) ]        (2.4)

Example:

A = [−3 −4; −1 0]   b = [4; 1]   c = [−1 −1]

cA = [−1 −1] × [−3 −4; −1 0] = [4 4]

Po = [ c; cA ] = [−1 −1; 4 4]
The determinant of the observability matrix is 0, which implies that the matrix is not full
rank and hence the system is not fully observable.
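The same conclusion follows numerically; a Python/NumPy sketch of the observability test for this example:

```python
import numpy as np

A = np.array([[-3, -4], [-1, 0]])
c = np.array([[-1, -1]])

Po = np.vstack([c, c @ A])        # observability matrix [c; cA]
print(Po)                         # rows: [-1 -1] and [4 4]
print(round(np.linalg.det(Po)))   # 0 -> Po is singular
print(np.linalg.matrix_rank(Po))  # 1, so the system is not observable
```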

3 Full State Feedback Control Design and Pole Placement


As shown in Figure (3.1), in the first step of state variable design we assume that all the state variables are available for feedback control. In this case, we can design the control input as


Figure (3.1)

u=−Kx (3.1)

where K is the gain matrix. The full state feedback design is to decide a suitable feedback
gain matrix K. For the closed-loop control system, we have

ẋ= Ax+ B (−Kx ) =( A−BK ) x (3.2)


The stability of the closed-loop system depends on the characteristic polynomial of
( A−BK ) .

Pole Assignment: To assign the closed loop poles at given locations via full state
feedback. Pole assignment can be achieved if the system is completely controllable.

Example: Design a full state feedback control of the system described by

d^3y/dt^3 + 5 d^2y/dt^2 + 3 dy/dt + 2y = u        (3.3)

such that the closed-loop poles are at {−1, −2, −3}.
Solution:
The transfer function of the system is


G(s) = 1 / (s^3 + 5s^2 + 3s + 2)        (3.4)

If we realize the system in the controller canonical form, we have

ẋ = [0 1 0; 0 0 1; −2 −3 −5] x + [0; 0; 1] u(t)        (3.5)

y = [1 0 0] x        (3.6)

If the control input is designed as

u = −[k1 k2 k3] x        (3.7)

the closed-loop system is given by

ẋ = [0 1 0; 0 0 1; −2 −3 −5] x + [0; 0; 1] (−[k1 k2 k3] x)        (3.8)

ẋ = [0 1 0; 0 0 1; −(k1+2) −(k2+3) −(k3+5)] x = (A − BK) x        (3.9)

The closed-loop characteristic polynomial is

|sI − (A − BK)| = s^3 + (k3+5)s^2 + (k2+3)s + (k1+2)        (3.10)

Comparing it with the desired characteristic polynomial

(s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6        (3.11)

we have k3 = 1, k2 = 8, k1 = 4, so that


K = [4 8 1]        (3.12)

u = −[4 8 1] x = −4x1 − 8x2 − x3        (3.13)

Note that for this realization we have x1 = y, x2 = dy/dt and x3 = d^2y/dt^2.
The control input is then expressed as

u = −4y − 8(dy/dt) − d^2y/dt^2
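The design can be verified by checking the eigenvalues of A − BK; a Python/NumPy sketch using the canonical-form matrices and the gain found above:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [-2, -3, -5]])
B = np.array([[0], [0], [1]])
K = np.array([[4, 8, 1]])               # gain found in the example

closed_loop_poles = np.linalg.eigvals(A - B @ K)
print(np.sort(closed_loop_poles.real))  # -3, -2, -1, as designed
```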

4 Observers Design (Estimators)

It is unrealistic to assume that all states of a system can be measured. It is therefore of interest to determine the states of the system from the available measurements and a model; this technique is referred to as state observer (estimator) design. It is assumed that the system is described by the state-space model

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)        (4.1)

The first condition that must be confirmed, in order to be able to design an observer, is that the system given by (4.1) is observable. Closed-loop observers can be constructed for the full state, and they may also be designed for reduced order.

As shown in Figure (4.1), the observer can be derived as


Figure 4.1(a). State observer with the plant

Figure 4.1(b). State observer description

x̂̇(t) = A x̂(t) + B u(t) + L( y(t) − C x̂(t) )
x̂̇(t) = (A − LC) x̂(t) + B u(t) + L y(t)        (4.2)

where

L( y(t) − C x̂(t) ) is the feedback on the estimation error, and
L ∈ R^(n×p) is the observer gain.

Determination of the observer gain matrix (L) in Equation (4.2) is the same mathematical problem as determining the feedback matrix (K) in the pole placement problem that was discussed before. The selection of the observer poles is a compromise between sensitivity to measurement error and rapid recovery of initial errors. In other words, a fast observer will converge quickly, but it will also be sensitive to measurement errors. Determining the matrix L is the dual of finding the gain matrix K for pole placement by state feedback.


Regulator problem

Selection of K such that

det(sI − A + BK) = P(s)        (4.3)

where P(s) is the desired characteristic polynomial.

Estimator problem

Selection of L such that

det(sI − A + LC) = P(s)        (4.4)

where P(s) is the desired characteristic polynomial.

Theorem

“If the pair (A,C) is observable, then the eigenvalues of (A-LC) can be placed arbitrarily”

Design of observer

 If the pair (A,C) is completely observable, the dual system (A^T, C^T, B^T, D^T) is completely reachable.
 Then we can design a compensator K for the dual system and place the eigenvalues of (A^T + C^T K) arbitrarily.
 The eigenvalues of the matrix (A^T + C^T K) equal the eigenvalues of its transpose (A + K^T C).
 Define L = −K^T.
Example (1)

Design a state observer for the continuous-time system in state-space form

ẋ(t) = [−1 0; 1 −1] x(t) + [2; 0] u(t)

y(t) = [0 1/2] x(t)

It is required to place the poles of the observer at {−4, −4}.


Solution

1. Check whether the system is observable: rank([C; CA]) = 2, so the system is observable.

2. Let L = [l1; l2] be the unknown observer gain.

3. Write the generic state estimation matrix as

A − LC = [−1 0; 1 −1] − [l1; l2] [0 1/2] = [−1 −l1/2; 1 −1 − l2/2]

det(λI − A + LC) = λ^2 + (2 + l2/2)λ + (1 + l1/2 + l2/2)

4. Impose that this polynomial equals the desired one: (λ + 4)^2 = λ^2 + 8λ + 16.

5. Solve the linear system of equations in l1, l2 and get

l1 = 18, l2 = 12

6. The resulting observer is dx̂(t)/dt = (A − LC) x̂(t) + B u(t) + L y(t):

dx̂(t)/dt = [−1 −9; 1 −7] x̂(t) + [2; 0] u(t) + [18; 12] y(t)

 Remember: do not use the “place” command when the desired observer poles are repeated (placed at the same location); use the “acker” command instead.

Matlab code for observer design

clear all
close all
clc
A=[-1 0; 1 -1];
B=[2;0];
C=[0 0.5];
D=0;
sys_c=ss(A,B,C,D)
Ob_AC=[C;C*A];
rank(Ob_AC)
P=[-4 -4]
L=acker(A',C',[-4 -4])'
obs_poles=eig(A-L*C)

---------------------------------

L =

    18
    12

obs_poles =

   -4.0000
   -4.0000
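The result can also be cross-checked outside Matlab; a Python/NumPy sketch that verifies the gain L places both observer poles at −4:

```python
import numpy as np

A = np.array([[-1.0, 0.0], [1.0, -1.0]])
C = np.array([[0.0, 0.5]])
L = np.array([[18.0], [12.0]])       # observer gain found in the example

print(A - L @ C)                     # [[-1, -9], [1, -7]], as in step 6
print(np.linalg.eigvals(A - L @ C))  # both eigenvalues at -4
```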

5 Linear Quadratic Optimal Control

The theory of optimal control has been well developed for over forty years. With the advances in computer technology, optimal control is now widely used in multi-disciplinary applications. The state feedback and observer design approach is a fundamental tool in the control of state-space systems. However, it is not always the most useful method. Three obvious difficulties are:

 The translation from design specifications (maximum desired overshoot and undershoot, settling time, etc.) to desired poles is not always direct, particularly


for complex systems; what is the best pole configuration for the given
specifications?
 In MIMO systems the state feedback gains that achieve a given pole configuration are not unique. What is the best K for a given pole configuration?
 The eigenvalues of the observer should be chosen faster than those of the closed-
loop system. Is there any other criterion available to help decide one configuration
over another?
The methods that we will now introduce give answers to these questions. We will see
how the state feedback and observer gains can be found in an optimal way.

5.1 Linear-Quadratic Lyapunov theory

Let a system be defined by

ẋ = Ax, A ∈ R^(n×n)        (5.1)

A quadratic Lyapunov function is defined as

V(x) = x^T P x        (5.2)

where x is a real vector and P is a real symmetric matrix. The derivative of V(x) along the trajectories of the system is

V̇(x) = ẋ^T P x + x^T P ẋ = x^T [ A^T P + PA ] x        (5.3)

Lyapunov Conditions for Asymptotic Stability

Theorem

If there exists P = P^T with x^T P x > 0 such that x^T [ A^T P + PA ] x < 0 for all x ≠ 0, then ẋ = Ax is asymptotically stable.

In Matlab the command lyap(A,R) solves the equation AX + XA^T + R = 0
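The same equation can be solved without Matlab; a Python/NumPy sketch of a small lyap-style solver using the Kronecker (vectorised) form of AX + XAᵀ + R = 0 (the example A and R are illustrative assumptions):

```python
import numpy as np

def lyap(A, R):
    """Solve A X + X A' + R = 0 (as Matlab's lyap(A,R)) via vectorisation:
    (I kron A + A kron I) vec(X) = -vec(R), with column-major vec."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(M, -R.flatten('F'))
    return x.reshape((n, n), order='F')

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # a stable matrix (poles -1, -2), an assumption
R = np.eye(2)
P = lyap(A, R)
print(np.round(P, 4))                     # [[1, -0.5], [-0.5, 0.5]], symmetric and > 0
```

Since A is stable and R > 0, the solution P is positive definite, consistent with the theorem above.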


5.2 Matrix Positive / Negative Definite And Semi-Definite

 A matrix is positive definite if it is symmetric and all its eigenvalues are positive.
 A matrix is positive definite if it is symmetric and all its pivots are positive.
 A matrix A is positive definite if x^T A x > 0 for all vectors x ≠ 0.
 A matrix A is positive definite if and only if it can be written as A = R^T R for some possibly rectangular matrix R with independent columns.
 A matrix is positive semi-definite if it is symmetric and all of its eigenvalues are non-negative.
Example (1)

Consider the matrix

A = [1 2; 2 1]

If we perform elimination (subtract 2 × row 1 from row 2) we get

[1 2; 0 −3]

The pivots are 1 and −3. In particular, one of the pivots is −3, and so the matrix is not positive definite. Were we to calculate the eigenvalues, we would see they are 3 and −1.

Example (2)

Consider the matrix

A = [1 4; 4 1]

Perform the following multiplication:

[x y] [1 4; 4 1] [x; y] = Q_A(x, y) = x^2 + y^2 + 8xy

For [x y] = [1 −1] ≠ 0, Q_A(1, −1) = 1^2 + (−1)^2 + 8(1)(−1) = −6 < 0.

Therefore, even though all of the entries of A are positive, A is not positive definite.

Example (3)

Consider the matrix

A = [1 −1; −1 4]

Then

Q_A(x, y) = x^2 + 4y^2 − 2xy = x^2 − 2xy + y^2 + 3y^2 = (x − y)^2 + 3y^2

which can be seen to be always nonnegative. Furthermore, Q_A(x, y) = 0 if and only if x = y and y = 0, so for all nonzero vectors (x, y), Q_A(x, y) > 0 and A is positive definite, even though A does not have all positive entries.

Example (4)

Consider the diagonal matrix

A = [1 0 0; 0 3 0; 0 0 2]

Then Q_A(x, y, z) = x^2 + 3y^2 + 2z^2, which can plainly be seen to be positive except when (x, y, z) = (0, 0, 0). Therefore, A is positive definite.

The preceding example can be generalized as follows: if A is an n×n diagonal matrix with diagonal entries d1, d2, ..., dn,


Then A is:

1. positive definite if and only if d_i > 0 for i = 1, 2, ..., n;

2. negative definite if and only if d_i < 0 for i = 1, 2, ..., n;

3. positive semi-definite if and only if d_i ≥ 0 for i = 1, 2, ..., n;

4. negative semi-definite if and only if d_i ≤ 0 for i = 1, 2, ..., n;

5. indefinite if and only if d_i > 0 for some indices i, 1 ≤ i ≤ n, and d_i < 0 for other indices.
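These sign conditions are easy to automate; a Python/NumPy sketch that classifies a symmetric matrix by the signs of its eigenvalues, applied to the examples above:

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)  # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness(np.array([[1.0, 2.0], [2.0, 1.0]])))    # Example (1): indefinite
print(definiteness(np.array([[1.0, -1.0], [-1.0, 4.0]])))  # Example (3): positive definite
print(definiteness(np.diag([1.0, 3.0, 2.0])))              # Example (4): positive definite
```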

5.3 The Basic Optimal Control Problem

The mathematical statement of the optimal control problem consists of:

 a description of the system to be controlled


 a description of the system constraints and possible alternatives
 a description of the task to be accomplished
 a statement of the criterion for judging optimal performance
The dynamic system to be controlled is described in state variable form, i.e., in
continuous-time by

ẋ ( t )= Ax ( t ) +B u ( t ) , x ( 0 )=x 0 (5.4)

y ( t ) =Cx ( t ) ,(5.5)


In the following, we assume that all the states are available as measurements, or otherwise that the system is observable, so that an observer can be constructed to estimate the state. System constraints will sometimes exist on the allowable values of the state variables or control inputs. The task to be performed usually takes the form of additional boundary conditions on the system state equations. For example, we could desire to transfer the state x(t) from a known initial state x(0) = x0 to a specified final state x(tf) = xd at a specified time tf, or at the minimum possible tf; see Figure (5.1).

Figure 5.1. changes in x (t )

Often, the task to be performed is implicitly accounted for by the performance criterion. The performance criterion, denoted J, is a measure of the quality of the system behavior. Usually, we try to minimize or maximize the performance criterion by selection of the control input, as shown in Figure (5.2). For every u(t) that is feasible (i.e., one that performs the desired task while satisfying the system constraints), a system trajectory x(t) will be associated.


Figure 5.2. The input u(t ) generates the trajectory x (t )

Figure (5.3) shows that a performance criterion could be to minimize the area under ‖x(t)‖^2, as a way to select those controls that produce overall small transients in the generated trajectory between x0 and the final state.

Figure 5.3. Performance index criterion

Yet another possible performance criterion could be to minimize the area under ‖u(t)‖^2, as a way to select those controls that use the least control effort.

A very important performance criterion, which combines the previous examples, is the quadratic performance criterion (LQR). This form is based on the second method of Lyapunov, and plays an important role in stability analysis. This criterion can be expressed in a general form as

J = ∫[0 to ∞] [ x^T(τ) Q x(τ) + u^T(τ) R u(τ) ] dτ        (5.6)


where, in Equations (5.4) & (5.5),

x ∈ R^n, u ∈ R^p, and y ∈ R^q

and where Q is non-negative definite and R is positive definite. Then the optimal control minimizing J is given by the linear state feedback law

u(t) = −Kx(t)        (5.7)

with

K = R^(−1) B^T P        (5.8)

and where P is the unique positive definite solution to the matrix Algebraic Riccati Equation (ARE)

A^T P + PA − PB R^(−1) B^T P + Q = 0        (5.9)

This is of course applicable provided that the system is controllable and observable.

In MATLAB K and P can be computed using

[K,P] = lqr(A,B,Q,R);
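Without the Control System Toolbox, the same gain can be obtained by solving the ARE through the stable eigenspace of the associated Hamiltonian matrix. Below is a Python/NumPy sketch of this classical method (the A, B, Q and R values are illustrative assumptions, chosen so that the stable eigenvalues are distinct):

```python
import numpy as np

def lqr(A, B, Q, R):
    """Solve A'P + PA - P B R^-1 B' P + Q = 0 via the stable invariant
    subspace of the Hamiltonian matrix, then return K = R^-1 B' P and P."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]             # eigenvectors of the n stable modes
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return Rinv @ B.T @ P, P

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([2.0, 1.0])                   # illustrative weights (assumptions)
R = np.array([[1.0]])
K, P = lqr(A, B, Q, R)
print(np.round(K, 4))                     # optimal gain, about [1.4142 1.1974]
```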

The matrices Q ∈ R^(n×n) (non-negative definite) and R ∈ R^(p×p) (positive definite) are the tuning parameters of the problem.

For example, the choice Q = C^T C and R = λI, with λ > 0, corresponds to making a tradeoff between plant output and input “energies”, with the cost

J = ∫[0 to ∞] [ ‖y(τ)‖^2 + λ‖u(τ)‖^2 ] dτ        (5.10)

In other words:

- λ small → faster convergence of y(t) → 0, but large control commands u(t) (high gain control)


- λ large → more sluggish response y(t), but smaller control commands u(t) (low gain control)
- Alternatively, with actuator restrictions, we make λ larger to reduce the control effort at the expense of state performance.
5.4 Structure of Q and R

The choice of elements in Q and R requires some mathematical properties, so that the performance index J will have a guaranteed and well-defined minimum and the Riccati equation will have a solution.

These properties are:

1. Both Q and R must be symmetric matrices.


2. Q must be positive semi-definite and Q can be factored as Q=C T C where C is
any matrix such that (C, A) is observable.
3. R must be positive definite
These conditions are necessary and sufficient for the existence and uniqueness of the
optimal controller that will asymptotically stabilize the system.

Q and R are usually chosen to be purely diagonal matrices for two reasons:

1- It becomes easy to ensure the correct definiteness properties (the principal minors of Q must be non-negative and those of R must be positive).
2- Each diagonal element then penalizes an individual state or input. However, it should be noted that the choice of Q is not unique. Several different Q matrices will often yield the same controller, and there is an equivalent diagonal version of Q in every case, so nothing is sacrificed in general by making Q diagonal.

The relative weightings chosen for Q and R determine the relative emphasis placed upon reducing state deviations and saving control energy. If it were important to minimize control effort at all costs, then the numbers in R would be made much greater than those in Q, for example.


5.5 Design Of LQC using Matlab

Example (1)

clear all
close all
clc
A=[0 1;0 -1];
B=[0;1];
C=[1 0];
D=0;
sys_op=ss(A,B,C,D);
pole_op=eig(sys_op);
figure(1)
step(sys_op)
Q = eye(2); R = 1;
[K, P, e] = lqr(sys_op,Q,R)
poles_cl=e;
Ac = [(A-B*K)];
Bc = [B];
Cc = [C];
Dc = [D];
sys_cl = ss(Ac,Bc,Cc,Dc)
t = 0:0.01:5;
r = 1*ones(size(t));
[y,t,x]=lsim(sys_cl,r,t);
plot(t,y)


6 Design of lead-lag compensators using root locus

The root locus typically allows us to choose the proper loop gain to meet a transient
response specification. As the gain is varied, we move through different regions of
response. Setting the gain at a particular value yields the transient response dictated by
the poles at that point on the root locus. Thus, we are limited to those responses that exist
along the root locus.

6.1 Improving Transient Response

Flexibility in the design of a desired transient response can be increased if we can design
for transient responses that are not on the root locus. Figure 6.1(a) illustrates the concept.
Assume that the desired transient response, defined by percent overshoot and settling
time, is represented by point B. Unfortunately, on the current root locus at the specified
percent overshoot, we can only obtain the settling time represented by point A after a
simple gain adjustment. Thus, our goal is to speed up the response at A to that of B,
without affecting the percent overshoot. This increase in speed cannot be accomplished
by a simple gain adjustment, since point B does not lie on the root locus. Figure 6.1(b)
illustrates the improvement in the transient response we seek: The faster response has the
same percent overshoot as the slower response.


Figure 6.1 a. Sample root locus, showing possible design point via gain adjustment (A)
and desired design point that cannot be met via simple gain adjustment (B); b. responses
from poles at A and B

To do this, we compensate the system with additional poles and zeros, so that the compensated system has a root locus that goes through the desired pole location for some value of gain.

6.2 Improving Steady-State Error: Ideal Integral (Lag) Compensation

One objective of this design is to improve the steady-state error without appreciably
affecting the transient response.

Steady-state error can be improved by placing an open-loop pole at the origin, because
this increases the system type by one. For example, a Type 0 system responding to a step
input with a finite error responds with zero error if the system type is increased by one.


To see how to improve the steady-state error without affecting the transient response,
look at Figure 6.2(a). Here we have a system operating with a desirable transient
response generated by the closed-loop poles at A. If we add a pole at the origin to increase
the system type, the angular contribution of the open-loop poles at point A is no longer
180°, and the root locus no longer goes through point A, as shown in Figure 6.2(b). To
solve the problem, we also add a zero close to the pole at the origin, as shown in Figure
6.2(c). Now the angular contribution of the compensator zero and compensator pole
cancel out, point A is still on the root locus, and the system type has been increased.
Furthermore, the required gain at the dominant pole is about the same as before
compensation.


Figure 6.2. Pole at A is a. on the root locus without compensator; b. not on the root locus with compensator pole added; c. approximately on the root locus with compensator pole and zero added

Example:

Given the system of Figure 6.3(a), operating with a damping ratio of 0.174, show that the addition of the ideal integral compensator shown in Figure 6.3(b) reduces the steady-state error to zero for a step input without appreciably affecting the transient response. The compensating network is chosen with a pole at the origin to increase the system type and a zero at −0.1, close to the compensator pole, so that the angular contribution of the compensator evaluated at the original, dominant, second-order poles is approximately zero. Thus, the original, dominant, second-order closed-loop poles are still approximately on the new root locus.

Solution:

We first analyze the uncompensated system and determine the location of the dominant, second-order poles. Next we evaluate the uncompensated steady-state error for a unit step input. The root locus for the uncompensated system is shown in Figure 6.4. A damping ratio of 0.174 is represented by a radial line drawn on the s-plane at 100.02°. Searching along this line with the root locus program discussed at www.wiley.com/college/nise, we find that the dominant poles are −0.694 ± j3.926 for a gain, K, of 164.6. Now look for the third pole on the root locus beyond −10 on the real axis. Using the root locus program and searching for the same gain as that of the dominant pair, K = 164.6, we find that the third pole is approximately at −11.61. This gain yields Kp = 8.23. Hence, the steady-state error is

e(∞) = 1 / (1 + Kp) = 1 / (1 + 8.23) = 0.108

where Kp = lim(s→0) G(s).
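The arithmetic can be reproduced directly; a small Python sketch evaluating Kp = lim(s→0) G(s) for G(s) = 164.6 / ((s+1)(s+2)(s+10)):

```python
# Position constant and steady-state step error for the uncompensated system.
K = 164.6
Kp = K / ((0 + 1) * (0 + 2) * (0 + 10))  # evaluate K/((s+1)(s+2)(s+10)) at s = 0
e_ss = 1 / (1 + Kp)
print(round(Kp, 2))    # 8.23
print(round(e_ss, 3))  # 0.108
```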

Figure 6.3. Closed-loop system a. before compensation; b. after ideal integral compensation


Figure 6.4. Root locus for uncompensated system of Figure 6.3(A)

Adding an ideal integral compensator with a zero at −0.1, as shown in Figure 6.3(b), we obtain the root locus shown in Figure 6.5. The dominant second-order poles, the third pole beyond −10, and the gain are approximately the same as for the uncompensated system. Another section of the compensated root locus is between the origin and −0.1. Searching this region for the same gain as at the dominant pair, K ≈ 158.2, the fourth closed-loop pole is found at −0.0902, close enough to the zero to cause pole-zero cancellation. Thus, the compensated system's closed-loop poles and gain are approximately the same as the uncompensated system's closed-loop poles and gain, which indicates that the transient response of the compensated system is about the same as that of the uncompensated system. However, the compensated system, with its pole at the origin, is a Type 1 system; unlike the uncompensated system, it will respond to a step input with zero error. Figure


6.6 compares the uncompensated response with the ideal integral compensated response.
The step response of the ideal integral compensated system approaches unity in the
steady state, while the uncompensated system approaches 0.892. Thus, the ideal integral
compensated system responds with zero steady-state error.

Figure 6.5 Root locus for compensated system of Figure 6.3(b)


Figure 6.6 Ideal integral compensated system response and the uncompensated system
response

Matlab code for this example

% without compensation
clear all
close all
clc
S=tf('s');
A=1;
sys=A/((S+1)*(S+2)*(S+10));
figure(1)
rlocus(sys)
A=164.6;
sys=A/((S+1)*(S+2)*(S+10));
sys_c=feedback(sys,1);
figure(2)
step(sys_c)

% with compensation
clear all
close all
clc
S=tf('s');
A=1;
sys=A*(S+0.1)/(S*(S+1)*(S+2)*(S+10));
figure(1)
rlocus(sys)
A=158.2;
sys=A*(S+0.1)/(S*(S+1)*(S+2)*(S+10));
sys_c=feedback(sys,1);
figure(2)
step(sys_c)
hold on
A=164.6;   % uncompensated gain, for the comparison plot
sys=A/((S+1)*(S+2)*(S+10));
sys_c=feedback(sys,1);
step(sys_c,'r')

6.3 Improving Transient Response: Ideal Derivative (Lead) Compensation

The transient response of a system can be selected by choosing an appropriate closed-


loop pole location on the s-plane. If this point is on the root locus, then a simple gain
adjustment is all that is required in order to meet the transient response specification. If
the closed-loop pole location is not on the root locus, then the root locus must be
reshaped so that the compensated (new) root locus goes through the selected closed-loop
pole location. In order to accomplish the latter task, poles and zeros can be added in the
forward path to produce a new open-loop function whose root locus goes through the design point on the s-plane. One way that generally speeds up the original system is to add a single zero to the forward path. This zero can be represented by a compensator whose transfer function is

Gc(s) = s + zc    (6.7)

This function, the sum of a differentiator and a pure gain, is called an ideal derivative, or
PD controller. Judicious choice of the position of the compensator zero can quicken the
response over the uncompensated system. In summary, transient responses unattainable
by a simple gain adjustment can be obtained by augmenting the system's poles and zeros
with an ideal derivative compensator.

Example:

Given the system of Figure 6.7, design an ideal derivative compensator to yield a 16%
overshoot, with a threefold reduction in settling time.

Figure 6.7 Feedback control system

Solution

Let us first evaluate the performance of the uncompensated system operating with 16% overshoot. The root locus for the uncompensated system is shown in Figure 6.8. Since 16% overshoot is equivalent to ζ = 0.504, we search along that damping-ratio line for an odd multiple of 180° and find that the dominant, second-order pair of poles is at -1.205 ± j2.064. Thus, the settling time of the uncompensated system is

Ts = 4/(ζωn) = 4/1.205 = 3.320 seconds

Since our evaluation of percent overshoot and settling time is based upon a second-order
approximation, we must check the assumption by finding the third pole and justifying the
second-order approximation. Searching beyond - 6 on the real axis for a gain equal to the
gain of the dominant, second-order pair, 43.35, we find a third pole at -7.59, which is
over six times as far from the jω-axis as the dominant, second-order pair. We conclude
that our approximation is valid.
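These uncompensated-system numbers can be verified with the standard second-order relations (ζ from percent overshoot, then the 2% settling-time estimate). Python is used here only as a calculator:

```python
import math

OS = 0.16  # 16% overshoot
lnOS = math.log(OS)
# Damping ratio from percent overshoot: zeta = -ln(OS)/sqrt(pi^2 + ln^2(OS))
zeta = -lnOS / math.sqrt(math.pi**2 + lnOS**2)

sigma_d = 1.205        # magnitude of the dominant-pole real part (from the root locus)
Ts = 4 / sigma_d       # 2% settling-time estimate, Ts = 4/(zeta*wn)
print(round(zeta, 3), round(Ts, 2))  # 0.504 3.32
```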

Figure 6.8 Root locus for uncompensated system shown in Figure 6.7


Figure 6.9 Compensated dominant pole superimposed over the uncompensated root locus

Now we proceed to compensate the system. First we find the location of the compensated system's dominant poles. For a threefold reduction in settling time, the compensated system's settling time must be one-third of the uncompensated value: 3.320/3 = 1.107 seconds. Therefore, the real part of the compensated system's dominant, second-order pole is

σ = 4/Ts = 4/1.107 = 3.613

Figure 6.9 shows the designed dominant, second-order pole, with a real part equal to -3.613 and an imaginary part of

ωd = 3.613 tan(180° − 120.26°) = 6.193
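This design-point arithmetic can be checked numerically; the 120.26° pole angle is 180° minus cos⁻¹ζ, so the damping-ratio line gives the imaginary part directly (Python as a calculator):

```python
import math

zeta = 0.504
Ts_new = round(3.320 / 3, 3)        # threefold settling-time reduction -> 1.107 s
sigma = 4 / Ts_new                  # real part of the new dominant pole
theta = math.acos(zeta)             # angle between the zeta line and the negative real axis
omega_d = sigma * math.tan(theta)   # imaginary part of the design point
print(round(sigma, 3), round(omega_d, 2))  # 3.613 6.19
```

The last digit differs slightly from the text's 6.193 only because of rounding in the intermediate values.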

Next we design the location of the compensator zero. Input the uncompensated system's
poles and zeros in the root locus program as well as the design point -3.613 ±j6.193 as a


test point. The result is the sum of the angles to the design point of all the poles and zeros
of the compensated system except for those of the compensator zero itself. The difference
between the result obtained and 180° is the angular contribution required of the
compensator zero. Using the open-loop poles shown in Figure 6.9 and the test point,
-3.613 +j6.193, which is the desired dominant second-order pole, we obtain the sum of
the angles as −275.6°. Hence, the angular contribution required from the compensator zero for the test point to be on the root locus is 275.6° − 180° = 95.6°. The geometry is shown in Figure 6.10, where we now must solve for −σ, the location of the compensator zero. From the figure,

6.193 / (3.613 − σ) = tan(180° − 95.6°)

Thus, σ = 3.006, and the compensator zero is placed at −3.006. The complete root locus for the compensated system is shown in Figure 6.11. As with the uncompensated system, the second-order estimate of the transient response is justified when the third closed-loop pole lies at least five times as far from the jω-axis as the real part of the dominant, second-order pair.
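The angle-condition computation can be reproduced numerically. This Python sketch sums the open-loop pole angles at the design point and then solves the Figure 6.10 geometry for the zero location:

```python
import cmath, math

poles = [0, -4, -6]                  # open-loop poles of G(s) = K/(s(s+4)(s+6))
p = complex(-3.613, 6.193)           # desired dominant closed-loop pole

# Sum of the angles from the open-loop poles to the design point (degrees)
pole_sum = sum(math.degrees(cmath.phase(p - pk)) for pk in poles)

# Angle condition: the compensator zero must contribute pole_sum - 180 degrees
theta_z = pole_sum - 180.0           # ~95.6 degrees
# Geometry of Figure 6.10: 6.193/(3.613 - sigma) = tan(180 - theta_z)
sigma = 3.613 - 6.193 / math.tan(math.radians(180.0 - theta_z))
print(round(theta_z, 1), round(sigma, 2))  # 95.6 3.01
```

The tiny difference from the text's σ = 3.006 comes from rounding the design point itself.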

Figure 6.10. Evaluating the location of the compensating zero


Figure 6.11. Root locus for the compensated system

Figure 6.12. Uncompensated and compensated system step responses

The final results are displayed in Figure 6.12, which compares the uncompensated system
and the faster compensated system.


Matlab code for this example


clear all
close all
clc
S = tf('s');
A = 1;
sys = A/(S*(S+4)*(S+6));
figure(1)
rlocus(sys)
A = 43.35;
sys = A*(S+3.006)/(S*(S+4)*(S+6));
figure(2)
rlocus(sys)
sys_c = feedback(sys,1);
figure(3)
step(sys_c)
hold on
sys = A/(S*(S+4)*(S+6));
sys_c = feedback(sys,1);
step(sys_c,'r')

7 PID Controller
The PID (Proportional-Integral-Derivative) controller consists of three control actions:


a) Proportional action

Proportional feedback control can reduce error responses to disturbances, but the system may exhibit a steady-state offset (droop) in response to a constant reference and may not be able to reject a constant disturbance completely. In addition, while proportional feedback increases the speed of response, it produces a much larger transient overshoot.

b) Integral action

The primary reason for integral control is to reduce or eliminate constant steady-state errors, but this benefit typically comes at the cost of a worse transient response. If the designer tries to increase the dynamic speed of response with a large integral gain, the response becomes very oscillatory. A way to avoid this behavior in some cases is to use proportional and integral control at the same time. In general, even though integral control improves the steady-state tracking response, it slows down the response if the overshoot is kept unchanged.

c) Derivative action:

Derivative action is used in conjunction with proportional and/or integral feedback to increase the damping and generally improve the stability of a system. In practice, pure derivative feedback is impractical to implement: if the error signal remained constant, the output of a derivative controller would be zero, so a proportional or integral term would be needed to provide a control signal in that situation.

Equation (7.1) gives the parallel continuous-time PID algorithm:

u(t) = ū + Kc [ e(t) + (1/τI) ∫₀ᵗ e(t′) dt′ + τD de(t)/dt ]    (7.1)

where
u(t) → the controller output.
ū → the steady-state (nominal) controller output.
Kc → the controller gain (usually dimensionless).
e(t) → the error signal, ysp (output setpoint) − ym (measured output).
τI → the integral time (also called the reset time), with units of time.
τD → the derivative time, with units of time.
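As an illustrative sketch only (not part of the course material), Eq. (7.1) can be discretized with a rectangular approximation of the integral and a backward-difference derivative. All names below are hypothetical:

```python
def pid_step(e, e_prev, integral, dt, Kc, tau_I, tau_D, u_bar=0.0):
    """One sample of the parallel PID law of Eq. (7.1), Euler-discretized."""
    integral += e * dt                      # running sum approximating the integral term
    derivative = (e - e_prev) / dt          # backward-difference derivative
    u = u_bar + Kc * (e + integral / tau_I + tau_D * derivative)
    return u, integral

# With a constant error the derivative term is zero and the integral
# term grows linearly, as the integral-action discussion above describes.
integral, e_prev = 0.0, 1.0
for _ in range(5):
    u, integral = pid_step(1.0, e_prev, integral, dt=0.1,
                           Kc=2.0, tau_I=1.0, tau_D=0.5)
    e_prev = 1.0
print(round(u, 2))  # 3.0
```

After five samples the integral has accumulated 0.5, so u = 2·(1 + 0.5/1) = 3.0, consistent with the hand calculation.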
