EE 469
University of Benghazi
Faculty of Engineering
Electrical and Electronic Engineering Department
----------------------------------------------------------------------------------------------------------------------------
Control systems II EE469 Fall 2017/2018
Course Format and Assessment
The course will be delivered in 42 hours, with 9 computer-based assignments (Matlab & Simulink).
The final exam will be held in January 2018. Course material will include handouts.
----------------------------------------------------------------------------------------------------------------------------
Control systems II (EE 469)
Pre-requisite: EE 312
Course description & Learning Outcomes:
1. Systems and modelling
2. Controllability and observability
3. Full state feedback control design and Pole placement
4. Observer
5. Optimal control technique and linear quadratic regulator
6. Lead-lag compensators by root locus
7. PID controller
----------------------------------------------------------------------------------------------------------------------------
Instructor:
Dr. Awad Shamekh
EEE Dep.
Office room no. 89
Email: [email protected]
---------------------------------------------------------------------------------------------------------------------------
References:
- Modern Control Systems, R. C. Dorf
- Control Systems Engineering, Norman Nise
- Linear Control Systems Engineering, Morris Driels
- Matlab & Simulink help documents
----------------------------------------------------------------------------------------------------------------------------
EEE Department /Control systems II/EE 469
In control theory, systems can be classified as linear or non-linear, and also as time-invariant or time-varying. In a linear time-invariant system the parameters that describe the system do not change over time. In many practical non-linear systems, however, these parameters tend to vary with time as the system's operating conditions change, giving time-variant behaviour.
There are three different ways in which a model of a process can be constructed:
1) through theoretical analysis of the process;
2) through experimentation and data analysis;
3) through a combination of experimentation and theoretical analysis.
A process can be described by many types of model, ranging from sophisticated and detailed models to more basic ones, depending on the complexity of the process and the requirements of the controller being developed. Typically, a model has either an internal structure, such as a ''state space'' model, or expresses an external relationship, as with an ''input-output (I/O)'' model.
Figure 1.1 illustrates the response of a continuous-time system to a step input, a response
that can be simulated by a first order system (an integrator with a feedback gain indicated
in the figure).
Ẋ (t )= AX (t )+BU (t ) (2.1)
Y(t) = CX(t) (2.2)
The standard test to determine whether a system is controllable is to find the rank of the following partitioned matrix:

Pc = [ B  AB  A²B  ...  A^(n-1)B ]        (2.3)
If the rank of this matrix is equal to the number of rows in B, which is equal to the
number of states n, then the system is completely state controllable. We can define the
rank of a matrix as the number of linearly independent columns. For example, if we
define the following matrices:
A = [1 5 1; 2 6 1; 3 1 2]    B = [1 5 6; 2 6 8; 3 1 4]    C = [1 2; 3 1]
The rank of A is 3 and the rank of C is 2. However, the rank of B is only 2: the entries in column 3 are equal to the entries in column 1 plus those in column 2. For a square matrix (i.e. a matrix with the same number of rows and columns) to be of full rank, i.e. of rank n, its determinant must be non-zero.
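These ranks can be verified numerically. The course itself uses Matlab; the following is only an equivalent illustrative sketch in Python/NumPy:

```python
import numpy as np

# Matrices from the rank example above
A = np.array([[1, 5, 1],
              [2, 6, 1],
              [3, 1, 2]])
B = np.array([[1, 5, 6],
              [2, 6, 8],
              [3, 1, 4]])   # column 3 = column 1 + column 2
C = np.array([[1, 2],
              [3, 1]])

print(np.linalg.matrix_rank(A))  # 3: full rank, det(A) != 0
print(np.linalg.matrix_rank(B))  # 2: columns are linearly dependent
print(np.linalg.matrix_rank(C))  # 2
```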
Example: Consider the system with

A = [0 1 0; 0 -1 1; 0 0 -5],   b = [0; 0; 5]

Let us construct the matrix

Pc = [ b  Ab  A²b ]

b = [0; 0; 5];   Ab = [0; 5; -25];   A²b = A(Ab) = [5; -30; 125]

Pc = [0 0 5; 0 5 -30; 5 -25 125]
The determinant of this matrix is equal to -125. Since this value is not zero, the system is controllable. To check this in Matlab we could type the following commands:

>> det([b A*b A*A*b])     % ans = -125 (nonzero)
>> rank([b A*b A*A*b])    % ans = 3
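The same controllability test can be sketched in Python/NumPy, as an equivalent of the Matlab commands above:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, -1, 1],
              [0, 0, -5]])
b = np.array([[0], [0], [5]])

# Controllability matrix Pc = [b  Ab  A^2 b]
Pc = np.hstack([b, A @ b, A @ A @ b])
print(np.linalg.det(Pc))          # -125.0 (nonzero -> controllable)
print(np.linalg.matrix_rank(Pc))  # 3
```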
Observability
A system is said to be completely observable if it is possible to reconstruct the state
vector completely from measurements made at the system’s output. For example, if we
consider the position of a car, then it will be completely observable if by measuring speed
(through the speedometer), distance (through the odometer) and steering wheel position,
it is possible to determine where the car was parked before being driven. The check for
observability is very similar to that for controllability. If the following derived matrix is full rank, i.e. its determinant is non-zero, then the system is observable:
Po = [ C; CA; CA²; ... ; CA^(n-1) ]        (3.4)
Example:

A = [-3 -4; -1 0];   b = [4; 1];   c = [-1 -1]

Po = [ c; cA ] = [-1 -1; 4 4]
The determinant of the observability matrix is 0, which implies that the matrix is not full
rank and hence the system is not fully observable.
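The same observability check can be done numerically; a Python/NumPy sketch equivalent to the hand calculation above:

```python
import numpy as np

A = np.array([[-3, -4],
              [-1, 0]])
c = np.array([[-1, -1]])

# Observability matrix Po = [c; cA]
Po = np.vstack([c, c @ A])
print(Po)                          # [[-1 -1] [ 4  4]]
print(np.linalg.det(Po))           # 0 -> not observable
print(np.linalg.matrix_rank(Po))   # 1 < 2
```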
Figure (3.1)
u = -Kx        (3.1)

where K is the feedback gain matrix. The full state feedback design problem is to choose a suitable gain matrix K. For the closed-loop control system we have

ẋ = (A - BK)x        (3.2)
Pole Assignment: To assign the closed loop poles at given locations via full state
feedback. Pole assignment can be achieved if the system is completely controllable.
Example: Design a full state feedback controller for the system described by

d³y/dt³ + 5 d²y/dt² + 3 dy/dt + 2y = u        (3.3)

such that the closed-loop poles are at {-1, -2, -3}.
Solution:
The transfer function of the system is
G(s) = 1 / (s³ + 5s² + 3s + 2)        (3.4)
If we realize the system in the controller canonical form, we have

ẋ = [0 1 0; 0 0 1; -2 -3 -5] x + [0; 0; 1] u(t)        (3.5)

y = [1 0 0] x        (3.6)

With the state feedback law

u = -[k1 k2 k3] x        (3.7)

the closed-loop system becomes

ẋ = [0 1 0; 0 0 1; -2 -3 -5] x + [0; 0; 1](-[k1 k2 k3] x)        (3.8)

ẋ = [0 1 0; 0 0 1; -(k1+2) -(k2+3) -(k3+5)] x = (A - BK) x        (3.9)
The desired characteristic polynomial is

(s + 1)(s + 2)(s + 3) = s³ + 6s² + 11s + 6        (3.10)

while from (3.9) the closed-loop characteristic polynomial is

s³ + (k3 + 5)s² + (k2 + 3)s + (k1 + 2)        (3.11)

Matching coefficients gives k1 + 2 = 6, k2 + 3 = 11 and k3 + 5 = 6, so

K = [4 8 1]        (3.12)
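The coefficient-matching step can be checked numerically. A Python/NumPy sketch for this companion-form example (Matlab's place/acker would give the same gain):

```python
import numpy as np

# Controller canonical realization of 1/(s^3 + 5s^2 + 3s + 2)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-2, -3, -5]])
B = np.array([[0], [0], [1]])

# Desired characteristic polynomial (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6
d = np.poly([-1, -2, -3])          # [1, 6, 11, 6]

# Coefficient matching: k1 + 2 = 6, k2 + 3 = 11, k3 + 5 = 6
K = np.array([[d[3] - 2, d[2] - 3, d[1] - 5]])
print(K)                            # [[4. 8. 1.]]

# Verify the closed-loop poles
print(np.sort(np.linalg.eigvals(A - B @ K).real))   # approx [-3, -2, -1]
```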
ẋ(t) = Ax(t) + Bu(t)        (4.1)
y(t) = Cx(t)
The first condition that must be confirmed in order to design an observer is that the system given by (4.1) is observable. Closed-loop observers can be constructed for the full state, or they may be designed for a reduced order.
x̂̇(t) = A x̂(t) + Bu(t) + L(y(t) - C x̂(t))        (4.2)

which can be rewritten as

x̂̇(t) = (A - LC) x̂(t) + Bu(t) + Ly(t)

where L is the observer gain matrix.
Determination of the observer gain matrix (L) in Equation (4.2) is the same mathematical problem as determining the feedback matrix (K) in the pole placement problem discussed before. The selection of the observer poles is a compromise between sensitivity to measurement error and rapid recovery of initial errors. In other words, a fast observer will converge quickly, but it will also be sensitive to measurement errors. Determining the matrix L is the dual of finding the gain matrix K for pole placement by state feedback.
Regulator problem
Estimator problem
Theorem
“If the pair (A,C) is observable, then the eigenvalues of (A-LC) can be placed arbitrarily”
Design of observer

Design an observer for the system

ẋ(t) = [-1 0; 1 -1] x(t) + [2; 0] u(t)

y(t) = [0 1/2] x(t)
Solution

1. Check whether the system is observable: rank([C; CA]) = 2, so the system is observable.

2. Let L = [l1; l2] be the unknown observer gain.

3. Form A - LC and its characteristic polynomial:

A - LC = [-1 0; 1 -1] - [l1; l2][0 1/2] = [-1 -l1/2; 1 -1-l2/2]

det(λI - A + LC) = λ² + (2 + l2/2) λ + (1 + l2/2 + l1/2)

4. Set this polynomial equal to the desired one: (λ + 4)² = λ² + 8λ + 16.

5. Solve the linear system of equations in l1, l2 to get

l1 = 18,  l2 = 12

6. The resulting observer is dx̂(t)/dt = (A - LC) x̂(t) + Bu(t) + Ly(t):

dx̂(t)/dt = [-1 -9; 1 -7] x̂(t) + [2; 0] u(t) + [18; 12] y(t)
Remember: do not use the “place” command when the desired observer poles are repeated (placed at the same location); “place” cannot assign repeated poles. Use the “acker” command instead.
clear all
close all
clc
A=[-1 0; 1 -1];
B=[2;0];
C=[0 0.5];
D=0;
sys_c=ss(A,B,C,D)
Ob_AC=[C;C*A];
rank(Ob_AC)
P=[-4 -4];
L=acker(A',C',P)'
obs_poles=eig(A-L*C)
---------------------------------
L =
    18
    12
obs_poles =
   -4.0000
   -4.0000
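The same observer gain can be recomputed outside Matlab. A Python/NumPy sketch of the coefficient-matching steps 3-5 above:

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [1.0, -1.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[0.0, 0.5]])

# det(lambda*I - A + L*C) = lambda^2 + (2 + l2/2) lambda + (1 + l2/2 + l1/2)
# Desired: (lambda + 4)^2 = lambda^2 + 8 lambda + 16
l2 = 2 * (8 - 2)                 # 12
l1 = 2 * (16 - 1 - l2 / 2)       # 18
L = np.array([[l1], [l2]])
print(L.ravel())                  # [18. 12.]

# Check the observer poles
poles = np.linalg.eigvals(A - L @ C)
print(poles)                      # both at -4
```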
The theory of optimal control has been well developed for over forty years. With the advances in computer technology, optimal control is now widely used in multi-disciplinary applications. The state feedback and observer design approach is a fundamental tool in the control of state equation systems. However, it is not always the most useful method. Three obvious difficulties are:
- For complex systems, what is the best pole configuration for the given specifications?
- In MIMO systems the state feedback gain that achieves a given pole configuration is not unique. What is the best K for a given pole configuration?
- The eigenvalues of the observer should be chosen faster than those of the closed-loop system. Is there any other criterion available to help decide one configuration over another?
The methods that we will now introduce give answers to these questions. We will see
how the state feedback and observer gains can be found in an optimal way.
V(x) = xᵀPx        (5.2)

where P is a symmetric positive definite matrix. Differentiating along the trajectories of ẋ = Ax gives

V̇(x) = ẋᵀPx + xᵀPẋ = xᵀ[AᵀP + PA]x        (5.3)
Theorem
If there exists P = Pᵀ with xᵀPx > 0 such that xᵀ[AᵀP + PA]x < 0 for all x ≠ 0, then ẋ = Ax is asymptotically stable.
- a matrix is positive definite if it is symmetric and all its eigenvalues are positive.
- a matrix is positive definite if it is symmetric and all its pivots are positive.
- a matrix A is positive definite if xᵀAx > 0 for all vectors x ≠ 0.
- a matrix A is positive definite if and only if it can be written as A = RᵀR for some possibly rectangular matrix R with independent columns.
- a matrix is positive semi-definite if all of its eigenvalues are non-negative.
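These tests are easy to apply numerically. A small Python/NumPy helper using the eigenvalue test (illustrative only; the helper name is our own):

```python
import numpy as np

def is_positive_definite(A):
    """Eigenvalue test: symmetric and all eigenvalues > 0."""
    return bool(np.allclose(A, A.T) and np.all(np.linalg.eigvalsh(A) > 0))

print(is_positive_definite(np.array([[1, -1], [-1, 4]])))   # True
print(is_positive_definite(np.array([[1, 4], [4, 1]])))     # False (eigenvalues 5 and -3)
print(is_positive_definite(np.array([[1, 2], [2, 1]])))     # False (eigenvalues 3 and -1)
```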
Example (1)

A = [1 2; 2 1]

Elimination gives the row-reduced form

[1 2; 0 -3]

The pivots are 1 and -3. Since one of the pivots is -3, the matrix is not positive definite. Were we to calculate the eigenvalues, we would see they are 3 and -1.
Example (2)

A = [1 4; 4 1]

The quadratic form is

[x y] [1 4; 4 1] [x; y] = Q_A(x, y) = x² + y² + 8xy

At (x, y) = (1, -1), for instance, Q_A = -6 < 0. Therefore, even though all of the entries of A are positive, A is not positive definite.
Example (3)

A = [1 -1; -1 4]

Then Q_A(x, y) = x² - 2xy + 4y² = (x - y)² + 3y², which is positive for all (x, y) ≠ (0, 0), so A is positive definite.
Example (4)

A = [1 0 0; 0 3 0; 0 0 2]
Then Q A ( x , y , z )=x 2+3 y 2 +2 z 2, which can plainly be seen to be positive except when
( x , y , z )=( 0,0,0 ) . Therefore, A is positive definite.
For a symmetric matrix A with eigenvalues d1, d2, ..., dn, A is:
1. positive definite if and only if d_i > 0 for all i, 1 ≤ i ≤ n;
2. positive semi-definite if and only if d_i ≥ 0 for all i;
3. negative definite if and only if d_i < 0 for all i;
4. negative semi-definite if and only if d_i ≤ 0 for all i;
5. indefinite if and only if d_i > 0 for some indices i, 1 ≤ i ≤ n, and d_j < 0 for other indices j.
ẋ ( t )= Ax ( t ) +B u ( t ) , x ( 0 )=x 0 (5.4)
y ( t ) =Cx ( t ) ,(5.5)
In the following, we assume that all the states are available as measurements or, otherwise, that the system is observable, so that an observer can be constructed to estimate the state. Constraints will sometimes exist on the allowable values of the state variables or control inputs. The task to be performed usually takes the form of additional boundary conditions on the system state equations. For example, we could desire to transfer the state x(t) from a known initial state x(0) = x0 to a specified final state x(t_f) = x_d at a specified time t_f, or at the minimum possible t_f; see Figure (5.1).
Often, the task to be performed is implicitly accounted for by the performance criterion. The performance criterion, denoted J, is a measure of the quality of the system behavior. Usually, we try to minimize or maximize the performance criterion by selection of the control input, as shown in Figure (5.2). For every u(t) that is feasible (i.e., one that performs the desired task while satisfying the system constraints), a system trajectory x(t) will be associated.
Figure (5.3) shows that a performance criterion could be to minimize the area under ‖x(t)‖², as a way to select those controls that produce overall small transients in the generated trajectory between x0 and the final state.
Yet another possible performance criterion could be to minimize the area under ‖u(t)‖², as a way to select those controls that use the least control effort.

J = ∫₀^∞ [ xᵀ(τ)Qx(τ) + uᵀ(τ)Ru(τ) ] dτ        (5.6)
x ∈ Rⁿ, u ∈ Rᵖ, and y ∈ R^q, where Q is non-negative definite and R is positive definite. Then the optimal control minimizing J is given by the linear state feedback law

u(t) = -Kx(t)        (5.7)

with

K = R⁻¹BᵀP        (5.8)

where P is the unique positive definite solution to the matrix Algebraic Riccati Equation (ARE)

AᵀP + PA - PBR⁻¹BᵀP + Q = 0        (5.9)

This is of course applicable provided that the system is controllable and observable.
In Matlab the gain and the ARE solution are obtained with:

[K,P] = lqr(A,B,Q,R);
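The same gain can be obtained from SciPy's ARE solver. A Python sketch using a double-integrator plant as a stand-in example (this plant, Q and R are our own illustrative choices, not from the notes):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator as a stand-in plant (illustration only)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the ARE for P, then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print(K)   # [1, sqrt(3)] for this plant

# Closed-loop poles of A - B K are stable
print(np.linalg.eigvals(A - B @ K))
```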
For example, the choice Q = CᵀC and R = λI, with λ > 0, corresponds to making a tradeoff between plant output and input "energies", with the cost

J = ∫₀^∞ [ ‖y(τ)‖² + λ‖u(τ)‖² ] dτ        (5.10)
In other words
- λ small → faster convergence of y(t) → 0, but large control commands u(t) (high-gain control)
- λ large → more sluggish response y(t), but smaller control commands u(t) (low-gain control)
- Alternatively, with actuator restrictions, we make λ larger to reduce the control effort at the expense of state performance.
5.4 Structure of Q and R
The elements of Q and R must satisfy certain mathematical properties so that the performance index J has a guaranteed, well-defined minimum and the Riccati equation has a solution.
Q and R are usually chosen to be purely diagonal matrices for two reasons:
1- it becomes easy to ensure the correct definiteness properties (the principal minors of Q must be non-negative and those of R must be positive).
2- the diagonal elements then penalize individual states or inputs. However, it should be noted that the choice of Q is not unique. Several different Q matrices will often yield the same controller, and there is an equivalent diagonal version of Q in every case, so nothing is sacrificed in general by making Q diagonal.
The relative weightings chosen for Q and R determine the relative emphasis placed upon reducing state deviations and saving control energy. If it were important to minimize control effort at all costs, then the numbers in R would be made much greater than those in Q, for example.
Example (1)
clear all
close all
clc
% A (the plant matrix) is missing from the source listing and must be defined here
B=[0;1];
C=[1 0];
D=0;
sys_op=ss(A,B,C,D);
pole_op=eig(sys_op);
figure(1)
step (sys_op)
Q = eye(2);R = 1;
[K, P, e] = lqr(sys_op,Q,R)
poles_cl=e;
Ac = [(A-B*K)];
Bc = [B];
Cc = [C];
Dc = [D];
sys_cl = ss(Ac,Bc,Cc,Dc)
t = 0:0.01:5;
r =1*ones(size(t));
[y,t,x]=lsim(sys_cl,r,t);
plot(t,y)
The root locus typically allows us to choose the proper loop gain to meet a transient
response specification. As the gain is varied, we move through different regions of
response. Setting the gain at a particular value yields the transient response dictated by
the poles at that point on the root locus. Thus, we are limited to those responses that exist
along the root locus.
Flexibility in the design of a desired transient response can be increased if we can design
for transient responses that are not on the root locus. Figure 6.1(a) illustrates the concept.
Assume that the desired transient response, defined by percent overshoot and settling
time, is represented by point B. Unfortunately, on the current root locus at the specified
percent overshoot, we can only obtain the settling time represented by point A after a
simple gain adjustment. Thus, our goal is to speed up the response at A to that of B,
without affecting the percent overshoot. This increase in speed cannot be accomplished
by a simple gain adjustment, since point B does not lie on the root locus. Figure 6.1(b)
illustrates the improvement in the transient response we seek: The faster response has the
same percent overshoot as the slower response.
Figure 6.1 a. Sample root locus, showing possible design point via gain adjustment (A)
and desired design point that cannot be met via simple gain adjustment (B); b. responses
from poles at A and B
Compensate the system with additional poles and zeros, so that the compensated system has a root locus that goes through the desired pole location for some value of gain.
One objective of this design is to improve the steady-state error without appreciably
affecting the transient response.
Steady-state error can be improved by placing an open-loop pole at the origin, because
this increases the system type by one. For example, a Type 0 system responding to a step
input with a finite error responds with zero error if the system type is increased by one.
To see how to improve the steady-state error without affecting the transient response,
look at Figure 6.2(a). Here we have a system operating with a desirable transient
response generated by the closed-loop poles at A. If we add a pole at the origin to increase the system type, the angular contribution of the open-loop poles at point A is no longer 180°, and the root locus no longer goes through point A, as shown in Figure 6.2(b). To
solve the problem, we also add a zero close to the pole at the origin, as shown in Figure
6.2(c). Now the angular contribution of the compensator zero and compensator pole
cancel out, point A is still on the root locus, and the system type has been increased.
Furthermore, the required gain at the dominant pole is about the same as before
compensation.
Figure 6.2. Pole at A is a. on the root locus without compensator; b. not on the root locus with compensator pole added; c. approximately on the root locus with compensator pole and zero added
Example:
Given the system of Figure 6.3(a), operating with a damping ratio of 0.174, show that the addition of the ideal integral compensator shown in Figure 6.3(b) reduces the steady-state
error to zero for a step input without appreciably affecting transient response. The
compensating network is chosen with a pole at the origin to increase the system type and
a zero at -0.1 , close to the compensator pole, so that the angular contribution of the
compensator evaluated at the original, dominant, second-order poles is approximately
zero. Thus, the original, dominant, second-order closed-loop poles are still approximately
on the new root locus.
Solution:
We first analyze the uncompensated system and determine the location of the dominant,
second-order poles. Next we evaluate the uncompensated steady-state error for a unit step
input. The root locus for the uncompensated system is shown in Figure 6.4. A damping ratio of 0.174 is represented by a radial line drawn on the s-plane at 100.02°. Searching along this line with the root locus program discussed at www.wiley.com/college/nise, we find that the dominant poles are -0.694 ± j3.926 for a gain, K, of 164.6. Now look for the third pole on the root locus beyond -10 on the real axis. Using the root locus program and searching for the same gain as that of the dominant pair, K = 164.6, we find that the third pole is approximately at -11.61. This gain yields Kp = 8.23. Hence, the steady-state error is
is
e(∞) = 1/(1 + Kp) = 1/(1 + 8.23) = 0.108

where Kp = lim (s→0) G(s)
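The position constant and the resulting steady-state error can be checked with a couple of lines, here in Python purely as a numeric check:

```python
# Position constant and step-input steady-state error for
# G(s) = K/((s+1)(s+2)(s+10)) with K = 164.6
K = 164.6
Kp = K / (1 * 2 * 10)          # lim s->0 of G(s)
e_ss = 1 / (1 + Kp)
print(round(Kp, 2))            # 8.23
print(round(e_ss, 3))          # 0.108
```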
Adding an ideal integral compensator with a zero at -0.1, as shown in Figure 6.3(b), we obtain the root locus shown in Figure 6.5. The dominant second-order poles, the third
pole beyond -10, and the gain are approximately the same as for the uncompensated
system. Another section of the compensated root locus is between the origin and -0.1.
Searching this region for the same gain at the dominant pair, K ≈ 158.2, the fourth closed-
loop pole is found at -0.0902, close enough to the zero to cause pole-zero cancellation.
Thus, the compensated system's closed-loop poles and gain are approximately the same
as the uncompensated system's closed-loop poles and gain, which indicates that the
transient response of the compensated system is about the same as the uncompensated
system. However, the compensated system, with its pole at the origin, is a Type 1 system;
unlike the uncompensated system, it will respond to a step input with zero error. Figure 6.6 compares the uncompensated response with the ideal integral compensated response.
The step response of the ideal integral compensated system approaches unity in the
steady state, while the uncompensated system approaches 0.892. Thus, the ideal integral
compensated system responds with zero steady-state error.
Figure 6.6 Ideal integral compensated system response and the uncompensated system
response
% without compensation
clear all
close all
clc
S=tf('s');
A=1;
sys=A/((S+1)*(S+2)*(S+10));
figure (1)
rlocus(sys)
A=164.6;
sys=A/((S+1)*(S+2)*(S+10));
sys_c=feedback(sys,1);
figure (2)
step(sys_c)
% with compensation
clear all
close all
clc
S=tf('s');
A=1;
sys=A*(S+0.1)/(S*(S+1)*(S+2)*(S+10));
figure (1)
rlocus(sys)
A=158.2;
sys=A*(S+0.1)/(S*(S+1)*(S+2)*(S+10));
sys_c=feedback(sys,1);
figure (2)
step(sys_c)
hold on
sys=A/((S+1)*(S+2)*(S+10));
sys_c=feedback(sys,1);
step(sys_c,'r')
design point on the s-plane. One way to speed up the original system that generally works is to add a single zero to the forward path. This zero can be represented by a compensator whose transfer function is

Gc = s + zc        (6.7)
This function, the sum of a differentiator and a pure gain, is called an ideal derivative, or
PD controller. Judicious choice of the position of the compensator zero can quicken the
response over the uncompensated system. In summary, transient responses unattainable
by a simple gain adjustment can be obtained by augmenting the system's poles and zeros
with an ideal derivative compensator.
Example:
Given the system of Figure 6.7, design an ideal derivative compensator to yield a 16%
overshoot, with a threefold reduction in settling time.
Solution
Let us first evaluate the performance of the uncompensated system operating with 16%
overshoot. The root locus for the uncompensated system is shown in Figure 6.8. Since 16% overshoot is equivalent to ζ = 0.504, we search along that damping ratio line for an odd multiple of 180° and find that the dominant, second-order pair of poles is at -1.205 ± j2.064. Thus, the settling time of the uncompensated system is

Ts = 4/(ζωn) = 4/1.205 = 3.320 seconds
Since our evaluation of percent overshoot and settling time is based upon a second-order
approximation, we must check the assumption by finding the third pole and justifying the
second-order approximation. Searching beyond - 6 on the real axis for a gain equal to the
gain of the dominant, second-order pair, 43.35, we find a third pole at -7.59, which is
over six times as far from the jω-axis as the dominant, second-order pair. We conclude
that our approximation is valid.
Figure 6.8 Root locus for uncompensated system shown in Figure 6.7
Figure 6.9 Compensated dominant pole superimposed over the uncompensated root locus
Now we proceed to compensate the system. First we find the location of the compensated system's dominant poles. In order to have a threefold reduction in the settling time, the compensated system's settling time will be one-third of the uncompensated one, i.e. 3.320/3 = 1.107 seconds. Therefore, the real part of the compensated system's dominant, second-order pole is

σ = 4/Ts = 4/1.107 = 3.613

Figure 6.10 shows the designed dominant, second-order pole, with a real part equal to -3.613 and an imaginary part of ωd = 3.613 tan(cos⁻¹ 0.504) = 6.193.
Next we design the location of the compensator zero. Input the uncompensated system's
poles and zeros in the root locus program as well as the design point -3.613 ±j6.193 as a
test point. The result is the sum of the angles to the design point of all the poles and zeros
of the compensated system except for those of the compensator zero itself. The difference
between the result obtained and 180° is the angular contribution required of the
compensator zero. Using the open-loop poles shown in Figure 6.9 and the test point,
-3.613 +j6.193, which is the desired dominant second-order pole, we obtain the sum of
the angles as - 275.6°. Hence, the angular contribution required from the compensator
zero for the test point to be on the root locus is +275.6° - 180° = 95.6°. The geometry is
shown in Figure 6.10, where we now must solve for - σ , the location of the compensator
zero. From the figure,
6.193 / (3.613 - σ) = tan(180° - 95.6°)
Thus, σ = 3.006. The complete root locus for the compensated system is shown in Figure
6.11. For the uncompensated system, the estimate of the transient response is accurate
since the third pole is at least five times the real part of the dominant, second-order pair.
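The angle bookkeeping above can be reproduced numerically. A Python sketch of the angle condition for this example:

```python
import cmath
import math

# Desired dominant pole and open-loop poles of G(s) = K/(s(s+4)(s+6))
s_d = complex(-3.613, 6.193)
poles = [0.0, -4.0, -6.0]

# Sum of angles from the open-loop poles to the design point
angle_sum = sum(math.degrees(cmath.phase(s_d - p)) for p in poles)
print(round(angle_sum, 1))        # 275.6

# Required zero contribution: theta_z = angle_sum - 180 (so the net angle is -180)
theta_z = angle_sum - 180.0       # 95.6 degrees
# Zero at -zc: tan(180 - theta_z) = 6.193 / (3.613 - zc)
zc = 3.613 - 6.193 / math.tan(math.radians(180.0 - theta_z))
print(round(zc, 2))               # approx 3.006
```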
The final results are displayed in Figure 6.12, which compares the uncompensated system
and the faster compensated system.
close all
clc
S=tf('s');
A=1;
sys=A/(S*(S+4)*(S+6));
figure (1)
rlocus(sys)
A=43.35;
sys=A*(S+3.006)/(S*(S+4)*(S+6));
figure (2)
rlocus(sys)
sys_c=feedback(sys,1);
figure (3)
step(sys_c)
hold on
sys=A/(S*(S+4)*(S+6));
sys_c=feedback(sys,1);
step(sys_c,'r')
7 PID Controller
The PID "Proportional-Integral-Derivative" controller consists of three control actions:
a) Proportional action
Proportional feedback control can reduce error responses to disturbances, but the system may retain a steady-state offset (droop) in response to a constant reference and may not be capable of completely rejecting a constant disturbance. In addition, proportional feedback increases the speed of response, but at the cost of a larger transient overshoot.
b) Integral action
The primary reason for integral control is to reduce or eliminate constant steady-state errors, but this benefit typically comes at the cost of worse transient response. If the designer tries to increase the dynamic speed of response with a large integral gain, the response becomes very oscillatory. A way to avoid this behavior in some cases is to use both proportional and integral control at the same time. In general, even though integral control improves the steady-state tracking response, it has the effect of slowing down the response if we keep the overshoot unchanged.
c) Derivative action
Derivative action responds to the rate of change of the error; it anticipates the error and adds damping, which improves the transient response, but it has no effect on a constant error. Equation (7.1) displays the parallel continuous-time PID algorithm as:

u(t) = ū + Kc [ e(t) + (1/τI) ∫₀ᵗ e(t') dt' + τD de(t)/dt ]        (7.1)
where
u(t) → the controller output.
ū → the steady-state controller output, which can be regarded as the nominal value of the controller.
Kc → the controller gain (usually dimensionless).
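The parallel PID law (7.1) can be sketched as a discrete-time implementation. The names Kc, tau_I, tau_D and u_bar follow the handout; the class itself, the sampling scheme and the gains used below are our own illustrative assumptions:

```python
# Minimal discrete sketch of the parallel PID law in Eq. (7.1):
# u(t) = u_bar + Kc*( e + (1/tau_I) * integral(e) + tau_D * de/dt )

class PID:
    def __init__(self, Kc, tau_I, tau_D, u_bar=0.0):
        self.Kc, self.tau_I, self.tau_D = Kc, tau_I, tau_D
        self.u_bar = u_bar
        self.integral = 0.0      # running approximation of the integral term
        self.e_prev = None       # previous error, for the derivative term

    def update(self, e, dt):
        self.integral += e * dt
        de = 0.0 if self.e_prev is None else (e - self.e_prev) / dt
        self.e_prev = e
        return self.u_bar + self.Kc * (e + self.integral / self.tau_I
                                       + self.tau_D * de)

# With a constant error, the integral term keeps growing -- this is what
# eventually removes the steady-state offset in closed loop.
pid = PID(Kc=2.0, tau_I=1.0, tau_D=0.1)
u1 = pid.update(1.0, dt=0.1)   # 2*(1 + 0.1) = 2.2
u2 = pid.update(1.0, dt=0.1)   # 2*(1 + 0.2) = 2.4
print(u1, u2)
```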