
Mathematical Control Theory

According to E.D. Sontag, Mathematical Control Theory “is the area of application-oriented
mathematics that deals with the basic principles underlying the analysis and design of control systems”.
In other words, mathematical control theory studies the properties of control systems. Roughly
speaking, a control system is a more general notion than that of a dynamical system: it is an
“object” that changes with time and whose behavior can be influenced by external parameters (inputs).
Control systems are divided into two groups: stochastic control systems (systems for which the
future state is not completely determined by the present state and the external parameters) and
deterministic control systems (systems for which the future state is completely determined by the
present state and the external parameters).
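The distinction can be illustrated with a minimal sketch (the dynamics and parameter values here are invented purely for illustration): a deterministic system's next state is a fixed function of the present state and input, while a stochastic system's next state also depends on a random disturbance.

```python
import random

def deterministic_step(x, u, a=0.9, b=1.0):
    # Next state is completely determined by the present state x
    # and the external parameter (input) u.
    return a * x + b * u

def stochastic_step(x, u, a=0.9, b=1.0, sigma=0.1):
    # Next state also depends on a random disturbance, so it is NOT
    # completely determined by x and u.
    return a * x + b * u + random.gauss(0.0, sigma)

x_det, x_sto = 1.0, 1.0
for k in range(5):
    u = 0.0  # zero input, for illustration
    x_det = deterministic_step(x_det, u)
    x_sto = stochastic_step(x_sto, u)

print(x_det)  # deterministic trajectory: 0.9**5, about 0.59049
```

Rerunning the loop reproduces `x_det` exactly every time, while `x_sto` differs from run to run; that repeatability is precisely what "completely determined" means above.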

The mathematician N. Wiener coined the term “Cybernetics” to refer to control theory and related
areas.

Directions of Mathematical Control Theory

There are three directions in Mathematical Control Theory:

1) The direction of Optimal Control: studies the ability to optimize the behavior of a control
system.

2) The direction of Controllability and Observability Problems: studies the controllability and
observability properties of a control system.

3) The direction of Feedback Control: studies the ability to stabilize a control system by means of
feedback control.

Roughly speaking,

- in the direction of optimal control the control action is pre-computed and then it is applied to the
system in order to obtain the optimal behavior.

- in the direction of controllability and observability the problem of the existence of a control action
that can achieve certain objectives is studied (or the problem of the existence of a method that can
provide the state of a system based on limited information).

- in the direction of feedback control the control action is calculated and applied on-line (i.e., as the
system evolves) in order to obtain the desired behavior. Specifically, some variables of the system are
measured, and at each time the values of the measured variables determine the control action through a
law, which is termed a “feedback law” (because, schematically, the measured variables are “fed back”
to the system).
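The on-line computation described above can be sketched in a few lines. The plant below is a hypothetical scalar system chosen only for illustration: it is open-loop unstable (the state doubles at every step), yet a simple feedback law stabilizes it.

```python
def plant(x, u):
    # Hypothetical system: with u = 0 the state doubles each step
    # (open-loop unstable).
    return 2.0 * x + u

def feedback_law(x):
    # The measured variable x is "fed back": u = -1.5 * x gives the
    # closed-loop dynamics x_next = 0.5 * x, which is stable.
    return -1.5 * x

x = 1.0
for k in range(10):
    u = feedback_law(x)  # control computed on-line from the measurement
    x = plant(x, u)

print(x)  # 0.5**10 = 0.0009765625, driven toward zero
```

Note that the same input sequence, pre-computed off-line in the spirit of optimal control, would fail if the initial state were slightly different; computing the control from measurements at each step is what makes feedback robust.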

The Direction of Feedback Control

It can be said that the direction of feedback control studies two major problems: the existence of a
stabilizing feedback for a given control system, and the design of such a stabilizing feedback. Many
classical control problems can be recast as feedback control problems; in particular, this holds for the
tracking control problem and the observer problem.

It can be claimed that the direction of Feedback Control in Mathematical Control Theory provides the
theory and mathematics underlying the area of Automatic Control (a very important area in
engineering). In fact, most of the technological achievements of the 20th century contain a control
mechanism that was designed using results from the direction of Feedback Control in Mathematical
Control Theory. The direction of Feedback Control is also related to the applied areas of Signal
Processing and Systems Theory.
Areas of Mathematics Related to Feedback Control

It should be clear that the direction of feedback control is closely related to

A) the Stability Theory (or Qualitative Theory) of Dynamical Systems and, more specifically, to:

- Comparison Principles

- Small-Gain Theorems

- Lyapunov Analysis

B) the area of Differential and Difference Equations (existence and uniqueness theory)

C) the areas of Non-Smooth Analysis and Set-Valued Analysis (which are rapidly growing areas)
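To make the role of Lyapunov analysis concrete, here is a minimal numerical sketch (the dynamics and the candidate function are invented for illustration): a function V that is positive away from the origin and strictly decreases along trajectories of the closed-loop system certifies stability.

```python
def V(x):
    # Candidate Lyapunov function: positive definite, V(0) = 0.
    return x * x

def closed_loop(x):
    # Stable closed-loop dynamics used for illustration.
    return 0.5 * x

x = 2.0
for k in range(20):
    x_next = closed_loop(x)
    # Lyapunov decrease condition, checked numerically along the
    # trajectory: V must strictly decrease away from the origin.
    assert V(x_next) < V(x)
    x = x_next

print(abs(x))  # 2.0 * 0.5**20, about 1.9e-06: the state approaches 0
```

In the actual theory the decrease condition is established analytically for all states at once, not checked along a single simulated trajectory; the sketch only shows what the condition asserts.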

Recently, the direction of feedback control has found widespread applications in:

- problems of numerical analysis

- differential (or difference) games

A student interested in the direction of feedback control should have a solid background in advanced
calculus, real and complex analysis, linear algebra, and differential equations.
