
EMT 4203: CONTROL ENGINEERING II

BSc. Mechatronic Engineering

DeKUT

Lecture Notes

By

Dr. Inno Oduor Odira

January 2024
Table of contents

I Classical Control

1 Control Problem and Control Actions
1.1 Control Problem
1.2 Control actions
1.2.1 State feedback control configuration
1.2.2 Output feedback control configuration

References

COURSE OUTLINE

EMT 4203: CONTROL ENGINEERING II


Prerequisites: EMT 4102 Control Engineering I

Course Purpose

The aim of this course is to enable the student to understand state space and polynomial representations of linear dynamical systems, develop skills for the design of linear multivariable control systems by pole placement and linear quadratic optimization, and grasp basic nonlinear system analysis and design methods.

Expected Learning Outcomes

By the end of this course, the learner should be able to:

i) Analyse linear dynamical systems by state space and polynomial methods

ii) Design controllers for linear and nonlinear systems

iii) Derive state space representations for nonlinear systems

Course description

Control problem and basic control actions: Proportional (P) control action, Derivative (D) control action, Integral (I) control action, Proportional plus Derivative (PD) control action, Proportional plus Integral (PI) control action. Controllers based on PID actions. Observability and state estimation. Pole placement by state feedback. Observer design by pole placement. Polynomial approach for pole placement. State variable feedback controller design: controllability, observability, eigenvalue placement, observer design for linear systems. Linear Quadratic Regulator (LQR), Algebraic Riccati Equation (ARE), disturbance attenuation problem, tracking problem, Kalman filter as an observer; digital implementation of state-space models, dynamic programming.

Mode of delivery

Two (2) hour lectures and two (2) hour tutorial per week, and at least five 3-hour laboratory
sessions per semester organized on a rotational basis.

Instructional Materials/Equipment

White Board, LCD Projector

Course Assessment

1. Practicals: 15%
2. Assignments: 5%
3. CATs: 10%
4. Final Examination: 70%
5. Total: 100%

Reference books

1. Franklin, Gene F. Feedback Control of Dynamic Systems. 5th ed. India: Prentice Hall, 2006.
2. Golnaraghi, Farid. Automatic Control Systems. 9th ed. New Jersey: Wiley, 2010.
3. Geering, Hans P. Optimal Control with Engineering Applications. Berlin Heidelberg: Springer-Verlag, 2007.
4. Ogata, Katsuhiko. Modern Control Engineering. 5th ed. Boston: Pearson, 2010.
5. Bolton, W. Control Systems. Oxford: Newnes, 2006.

Course Journals

1. Journal of Dynamic Systems, Measurement, and Control, ISSN: [0022-0434]


2. IRE Transactions on Automatic Control, ISSN: [0096-199X]
3. SIAM Journal on Control and Optimization, ISSN: [0363-0129]
4. Transactions of the Institute of Measurement and Control, ISSN: [0142-3312]
Part I

(Classical Control)
Chapter 1

Control Problem and Control Actions

1.1 Control Problem


In any control system where a dynamic variable has to be maintained at a desired set-point value, it is the controller that enables the control objective to be met.
The control design problem is the problem of determining the characteristics of the controller so that the controlled output can be:
1. Set to prescribed values, called the reference.
2. Maintained at the reference values despite unknown disturbances.
3. Maintained as in (1) and (2) despite inherent uncertainties and changes in the plant dynamic characteristics.
4. Maintained within some constraints.
The first requirement is called tracking or stabilization (regulation), depending on whether or not the set point changes continuously. The second condition is called disturbance rejection. The third is called robust tracking/stabilization and disturbance rejection. The fourth is called optimal tracking/stabilization and disturbance rejection.

1.2 Control actions


The liquid level control system in a buffer tank shown in Fig. 1.1 will be used for illustration. This can be represented as the general plant shown in Fig. 1.2. The manner in which the automatic controller produces the control signal is called the control action.
The control signal is produced by the controller, so a controller has to be connected to the plant. The configuration may be either closed loop or open loop, as shown in Fig. 1.3 and Fig. 1.4 respectively. These may also be configured as either an output feedback control configuration or a state feedback control configuration.

Fig. 1.1 Liquid level control system in a buffer tank

Fig. 1.2 General plant

Fig. 1.3 Closed-loop controlled system.

Fig. 1.4 Open-loop controlled system.



1.2.1 State feedback control configuration.


The general mathematical model of state feedback takes the form

ẋ = Ax + Bu      (state equation)
y = Cx + Du      (output equation)

The associated block diagram is shown in Fig. 1.5. Two typical control problems are of interest:

Fig. 1.5 Regulation and Tracking configuration.

• The regulator problem, in which r = 0 and we aim to drive y(t) → 0 as t → ∞ (i.e., a pure stabilisation problem).
• The tracking problem, in which y(t) is required to track r(t) ≠ 0.
When r(t) = R ≠ 0 is constant, the regulator and tracking problems are essentially the same. Tracking a nonconstant reference r(t) is a more difficult problem, called the servomechanism problem.
The control law for state feedback then takes the form

u(t) = K2 r − K1 x
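
As an illustrative sketch only (not taken from these notes), the Python snippet below implements this control law for an assumed second-order plant. The matrices A, B, C, the chosen closed-loop pole locations, and the use of SciPy's pole-placement routine are all assumptions made purely for demonstration; pole placement itself is treated later in the course.

import numpy as np
from scipy.signal import place_poles

# Assumed (illustrative) plant:  x_dot = A x + B u,   y = C x
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Feedback gain K1, chosen so that A - B*K1 has the (arbitrary) poles -4 and -5
K1 = place_poles(A, B, [-4.0, -5.0]).gain_matrix

# Feedforward gain K2, chosen so that y -> r for a constant reference r
K2 = -1.0 / (C @ np.linalg.inv(A - B @ K1) @ B)

def state_feedback(x, r):
    """State feedback control law u(t) = K2*r - K1*x."""
    return (K2 * r - K1 @ x).item()

# Regulation corresponds to r = 0, tracking to a nonzero r
u = state_feedback(np.array([0.1, -0.2]), r=0.0)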

1.2.2 Output feedback control configuration.


The controller compares the actual value of the system output with the reference input (desired value), determines the deviation, and produces a control signal that reduces the deviation to zero or to a small value, as illustrated in Fig. 1.6.
The algorithm that relates the error to the control signal is called the control action (also called the control law or strategy).
A controller is required to shape the error signal such that certain control criteria or specifications are satisfied; a bare-bones sketch of such a loop is given after the list below. These criteria may involve:
• Transient response characteristics,
• Steady-state error,
• Disturbance rejection,
• Sensitivity to parameter changes.
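
Purely as a structural sketch (nothing here is prescribed by the notes), the loop just described can be written in a few lines of Python; run_output_feedback, control_law, and plant_step are hypothetical names, and the control action and plant model are left as placeholders.

def run_output_feedback(r, y0, control_law, plant_step, n_steps, dt):
    """Generic output-feedback loop: measure y, form e = r - y, act on the plant."""
    y = y0
    trajectory = [y]
    for _ in range(n_steps):
        e = r - y                   # deviation between desired and actual output
        u = control_law(e)          # control action (P, I, D, on-off, ...)
        y = plant_step(y, u, dt)    # plant advances by one time step
        trajectory.append(y)
    return trajectory

# Example use with an assumed integrator plant dy/dt = u and proportional action
traj = run_output_feedback(
    r=1.0, y0=0.0,
    control_law=lambda e: 2.0 * e,            # Kp = 2
    plant_step=lambda y, u, dt: y + u * dt,   # forward Euler step of dy/dt = u
    n_steps=5000, dt=0.001,
)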

Fig. 1.6 Expanded closed-loop output feedback configuration.

Fig. 1.7 Control action.

The most commonly used control actions are:

1. Two position (on-off, bang-bang)

2. Proportional (P-control)

3. Derivative (D-control)

4. Integral (I-control)

Two position (on-off, bang-bang)

In a two-position control system, the actuating element has only two positions, which are generally on and off. Such controllers are usually electric devices; they are widely used because they are simple and inexpensive. The output of the controller is given by Eqn. (1.1).
u(t) = U1   for e(t) ≥ 0,
u(t) = U2   for e(t) < 0.        (1.1)
where U1 and U2 are constants.
The block diagram of an on-off controller is shown in Fig. 1.8.
The value of U2 is usually either:
• zero, in which case the controller is called an on-off controller (Fig. 1.9), or
• equal to −U1, in which case the controller is called a bang-bang controller (Fig. 1.10).
Two-position controllers suffer from cyclic oscillations, which are mitigated by introducing a differential gap (or neutral zone) such that the output switches to U1 only after the actuating error

Fig. 1.8 Block diagram of an on-off controller.

Fig. 1.9 Block diagram of an on-off controller (U2 = 0).

Fig. 1.10 Block diagram of a bang-bang controller (U2 = −U1).

becomes positive by an amount d. Similarly it switches back to U2 only after the actuating error
becomes equal to −d.

Fig. 1.11 Block diagram of an on-off controller with a differential gap.

The existence of a differential gap reduces the accuracy of the control system, but it also reduces the frequency of switching, which results in a longer operational life.
With reference to Fig. 1.12, assume at first that the tank is empty. In this case, the solenoid will be energized, opening the valve fully.

Fig. 1.12 Water level control system.

If, at some time t0, the solenoid is de-energized, closing the valve completely (qi = 0), then the water in the tank will drain off. The variation of the water level in the tank is then given by the emptying curve.

Fig. 1.13 Water level control system.

If the switch is adjusted for a desired water level, the input qi will be either on or off (a positive constant or zero), depending on the difference between the desired and the actual water levels, so as to create a differential gap.
Therefore, during actual operation the input will be on until the water level exceeds the desired level by half the differential gap.
Then the solenoid valve will be shut off until the water level drops below the desired level by half the differential gap. The water level will continuously oscillate about the desired level.
It should be noted that the smaller the differential gap, the smaller the deviation from the desired level; on the other hand, the frequency of switching increases. A rough simulation of this behaviour is sketched below.
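
The sketch below is a minimal Python simulation of this behaviour, assuming a simple linear-drain tank model; the inflow rate, drain coefficient, set point, and gap width are illustrative values only.

import numpy as np

dt, T = 0.1, 200.0                      # time step and simulation length [s]
desired, gap = 1.0, 0.1                 # desired level [m] and differential gap [m]
q_on, drain = 0.02, 0.01                # inflow when the valve is open; drain coefficient
level, valve_open = 0.0, True           # start with an empty tank and the valve open

history = []
for t in np.arange(0.0, T, dt):
    # Two-position action with a neutral zone of width 'gap' about the set point
    if valve_open and level >= desired + gap / 2:
        valve_open = False              # switch off only above set point + gap/2
    elif not valve_open and level <= desired - gap / 2:
        valve_open = True               # switch on only below set point - gap/2

    qi = q_on if valve_open else 0.0    # inflow is either a positive constant or zero
    level += (qi - drain * level) * dt  # simple tank balance, forward Euler step
    history.append((t, level, valve_open))

# 'history' shows the level cycling within roughly +/- gap/2 of the desired value;
# widening the gap reduces the switching frequency but increases the deviation.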

Fig. 1.14 Water level control system.

Fig. 1.15 Water level control system.

Proportional Control Action

The proportional controller is essentially an amplifier with an adjustable gain. For a controller with proportional control action, the relationship between the controller output u(t) and the actuating error signal e(t) is

u(t) = Kp e(t)

where Kp is the proportional gain.


Whatever the actual mechanism may be, the proportional controller acts as an amplifier with an adjustable gain. The block diagram of a proportional controller is shown in Fig. 1.16.
The proportional action is shown in Fig. 1.17. In general:
• For small values of Kp, the corrective action is slow, particularly for small errors.
• For large values of Kp, the performance of the control system is improved, but this may lead to instability.
Proportional control is said to look at the present error signal.
Usually, a compromise is necessary in selecting a proper gain. If this is not possible, then
proportional control action is used with some other control action(s).
The value of Kp should be selected to satisfy the requirements of:

Fig. 1.16 Block diagram of a proportional controller.

Fig. 1.17 Proportional action.

• stability,
• accuracy, and
• satisfactory transient response, as well as
• satisfactory disturbance rejection characteristics.
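
The following minimal sketch, assuming a simple first-order plant dy/dt = -y + u (not a plant taken from these notes), illustrates the trade-off: a larger Kp gives a faster response and a smaller, but never zero, steady-state offset.

import numpy as np

def simulate_p_control(Kp, T=10.0, dt=0.001, r=1.0):
    y = 0.0
    for _ in np.arange(0.0, T, dt):
        e = r - y                  # error: the present value only
        u = Kp * e                 # proportional control action
        y += (-y + u) * dt         # assumed plant dy/dt = -y + u, forward Euler
    return y                       # value reached at t = T

for Kp in (1.0, 5.0, 20.0):
    y_final = simulate_p_control(Kp)
    # closed-loop steady state is Kp/(1 + Kp), so the offset 1/(1 + Kp) shrinks with Kp
    print(f"Kp = {Kp:5.1f}:  y(10 s) = {y_final:.3f},  offset = {1 - y_final:.3f}")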

Integral Control Action

The value of the controller output u(t) is changed at a rate proportional to the actuating error signal e(t), as given by Eqn. (1.2):

du(t)/dt = Ki e(t)        (1.2)

or, equivalently,

u(t) = Ki ∫₀ᵗ e(τ) dτ        (1.3)
where Ki is an adjustable constant.
With this type of control action, the control signal is proportional to the integral of the error signal.
Even a small persistent error is acted upon, since integral control produces a control signal proportional to the area under the error signal.

Hence, integral control increases the accuracy of the system.


Integral control is said to look at the past of the error signal.
If the value of e(t) is doubled, then u(t) varies twice as fast. For zero actuating error, the value of u(t) remains stationary. Integral control action is also called reset control. Fig. 1.18 shows the block diagram of the integral controller.
Remember that each factor of s in the denominator of the open-loop transfer function increases the type of the system by one, and thus reduces the steady-state error.
The use of an integral controller increases the type of the open-loop transfer function by one.

Fig. 1.18 Block diagram of an integral controller.

Fig. 1.19 Integral control action.
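
As a minimal sketch (using the same assumed first-order plant dy/dt = -y + u as in the proportional example), adding an integral term to a proportional controller drives the steady-state offset to zero, in line with the discussion above.

import numpy as np

def simulate_pi_control(Kp, Ki, T=20.0, dt=0.001, r=1.0):
    y, integral = 0.0, 0.0
    for _ in np.arange(0.0, T, dt):
        e = r - y
        integral += e * dt          # integral term: the accumulated past error
        u = Kp * e + Ki * integral  # PI control action
        y += (-y + u) * dt          # assumed plant dy/dt = -y + u, forward Euler
    return y

print("P only:", round(simulate_pi_control(Kp=5.0, Ki=0.0), 4))   # offset remains
print("PI    :", round(simulate_pi_control(Kp=5.0, Ki=2.0), 4))   # offset driven to ~0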

Derivative Control Action

In this case, the control signal is proportional to the derivative (slope) of the error signal:

u(t) = Kd de(t)/dt

where Kd is the derivative gain.
Derivative control action is never used alone, since it does not respond to a constant error, however large it may be.
Derivative control action responds to the rate of change of the error signal and can produce a control signal before the error becomes too large.

Fig. 1.20 Block diagram of a derivative controller.

Fig. 1.21 Derivative control action.

As such, derivative control action anticipates the error, takes early corrective action, and tends to increase the stability of the system.
Derivative control is said to look at the future of the error signal and to apply brakes to the system.
Derivative control action has no direct effect on the steady-state error, but it increases the damping in the system and therefore allows a higher value of the open-loop gain K, which in turn reduces the steady-state error.
Derivative control, however, has disadvantages as well: it amplifies noise signals coming in with the error signal and may saturate the actuator, and it cannot be used if the error signal is not differentiable.
Thus derivative control is used only together with some other control action!
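
A final minimal sketch, assuming a double-integrator plant d²y/dt² = u (chosen because proportional control alone leaves it undamped), shows how the derivative term adds damping and removes the sustained overshoot.

import numpy as np

def simulate_pd_control(Kp, Kd, T=20.0, dt=0.001, r=1.0):
    y, ydot = 0.0, 0.0
    prev_e = r - y
    overshoot = 0.0
    for _ in np.arange(0.0, T, dt):
        e = r - y
        edot = (e - prev_e) / dt    # slope (rate of change) of the error signal
        prev_e = e
        u = Kp * e + Kd * edot      # PD control action
        ydot += u * dt              # assumed plant: d2y/dt2 = u
        y += ydot * dt
        overshoot = max(overshoot, y - r)
    return overshoot

print("P only (Kp=4):        overshoot =", round(simulate_pd_control(4.0, 0.0), 2))
print("PD     (Kp=4, Kd=4):  overshoot =", round(simulate_pd_control(4.0, 4.0), 2))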
