
Nonlinear Systems Analysis
EE3L003 CONTROL SYSTEMS
N. C. Sahoo
Nonlinear Systems
• Most physical systems are inherently nonlinear to some extent. Nevertheless, over a certain operating range their analysis can be carried out using linear control theory, and this can yield powerful results.
• However, some systems are sufficiently nonlinear that important features of their performance may be missed completely if they are analyzed through linear techniques.
• For such systems, special analytical and graphical techniques are employed that take the system nonlinearities into account.
Behavior of Nonlinear Systems:
Superposition Theorem: Most fundamental property
of linear systems
• A linear system, designed to perform
satisfactorily when excited by a standard test
signal, will exhibit satisfactory behavior under
any circumstances.
• Amplitude of the test signal is not important for
linear systems since any change in input signal
amplitude results in change of response scale
with no change in basic response
characteristics.
For nonlinear systems, the principle of superposition
no longer holds. Thus, the response of nonlinear
systems to a particular test signal is no guide to their
behaviour to other inputs. Also, the nonlinear system
may be highly sensitive to input amplitude.
• Laplace transforms are no longer applicable to nonlinear systems.
The stability of linear systems is determined solely by
the system poles and is independent of whether the
system is driven or not. Also, the stability of undriven
linear systems is independent of the magnitude of the
finite initial state.
The situation is not so clear for nonlinear systems.
The stability is very much dependent on the input and
also on the initial state.
• Also, nonlinear systems may exhibit limit cycles, which are self-sustained oscillations of fixed frequency and amplitude.
• Determining the existence of limit cycles is not an easy task, as these may depend on both the type and the amplitude of the excitation signal.
Even for a stable nonlinear system, the transient and
frequency response may possess some peculiar
features not found in linear systems.
Example: Mass-spring-damper system with sinusoidal forcing
[Figures: system diagram, spring characteristic, frequency response curve]
$M\ddot{x} + f\dot{x} + kx = F\cos\omega t$   [assuming linear components]
• Now let the spring be a nonlinear element.
Restoring spring force $= k_1 x + k_2 x^3$
$k_2 = 0$: linear spring;  $k_2 > 0$: hard spring;  $k_2 < 0$: soft spring
Now,
$M\ddot{x} + f\dot{x} + k_1 x + k_2 x^3 = F\cos\omega t$   (Duffing equation)
[Figures: new frequency response curves for the hard-spring and soft-spring cases]
The presence of the nonlinear term $k_2 x^3$ ($k_2 > 0$) causes the resonant peak to bend towards higher frequencies.
• As the input frequency is gradually increased from zero, holding the input amplitude fixed, the response follows the curve through the points A, B, and C; but at C, an increment in frequency results in a discontinuous jump down to point D. After this, the response curve follows DE with further increase in frequency.
• If the frequency is now decreased, the response follows the curve EDF, with a jump up to B occurring at F, and the response curve then moves towards A.
• Thus, in a certain range of frequencies, the response is double-valued.
• This phenomenon is called jump resonance.
For the soft-spring case, the resonant peak bends towards lower frequencies and a similar jump resonance is seen.
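The jump phenomenon can be reproduced numerically by sweeping the forcing frequency up and then down and recording the steady-state amplitude. The following is a minimal Python sketch, not part of the original notes; the parameter values are illustrative assumptions.

```python
# Sketch of jump resonance in the Duffing equation
#   M x'' + f x' + k1 x + k2 x^3 = F cos(w t)
# Parameter values below are assumed for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

M, f, k1, k2, F = 1.0, 0.1, 1.0, 1.0, 0.3   # hard spring: k2 > 0

def duffing(t, y, w):
    x, v = y
    return [v, (F * np.cos(w * t) - f * v - k1 * x - k2 * x**3) / M]

def steady_amplitude(w, y0):
    # Integrate past the transient, then measure the peak of |x|.
    T = 2 * np.pi / w
    sol = solve_ivp(duffing, (0, 120 * T), y0, args=(w,), max_step=T / 40)
    tail = sol.y[0][sol.t > 80 * T]
    return float(np.max(np.abs(tail))), [sol.y[0][-1], sol.y[1][-1]]

def sweep(freqs):
    amps, y0 = [], [0.0, 0.0]
    for w in freqs:                      # carry the state over, so the response
        a, y0 = steady_amplitude(w, y0)  # stays on one branch until it jumps
        amps.append(a)
    return np.array(amps)

ws = np.linspace(0.6, 1.8, 25)
up, down = sweep(ws), sweep(ws[::-1])[::-1]
for w, au, ad in zip(ws, up, down):
    print(f"w = {w:4.2f}  amplitude up-sweep = {au:5.3f}  down-sweep = {ad:5.3f}")
# Near the bend of the resonance curve the two sweeps can give different
# amplitudes: the double-valued region responsible for the jump.
```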
For this system, if the forcing function is removed, and
the unforced system is activated by initial conditions
only, the response results in damped oscillations.
• Linear spring ( k2  0)
:The amplitude of successive peaks decreases
and the damped frequency of oscillation remains
unchanged.
• Nonlinear spring:
In this case, the decrease in amplitude comes with an
increase in frequency for k2  0 and a decrease in
frequency for k2  0. As the amplitude of oscillation
3
goes to zero, the k2 x becomes negligible and the
frequency of oscillation tends towards a constant value.

Decreasing amplitude x

 Response of nonlinear system depends on amplitude.


Investigation of Nonlinear Systems:
• These phenomena of nonlinear systems cannot be explained by linear theory. No single mathematical tool (like the Laplace transform for linear systems) exists which can analyze all nonlinear system phenomena.
• Results derived for one class of nonlinear systems cannot be extended to others.
• Because of this difficulty, one first considers the possibility of approximating a nonlinear system by a linear model.
• This type of approximation is valid only if the operation is in a restricted range around the operating point.
Example:
The nonlinear spring with restoring force $k_1 x + k_2 x^3$ may be approximated by a linear one if $|k_2 x^3| \ll |k_1 x|$ is satisfied over the entire range of $x$.
Moreover, many physical systems are also nonlinear
even within a restricted range.
Example: Dry friction nonlinearity (piecewise-linear approximation)
• For this element, no single straight line can approximate the curve over the normal range of speeds.
• However, piecewise linearization can be used, and linear system theory can then be applied. Such a system is called a piecewise-linear system.
For more complicated nonlinear systems, piecewise linearization is cumbersome and time consuming. For such cases, two analysis methods are used:
a) Phase plane method
b) Describing function method
Phase Plane Method:
• This is a graphical method from which information about transient behavior and stability can be obtained by constructing phase-plane trajectories.
• Mostly used for 2nd-order systems, with good accuracy.
Describing Function Method:
• This method is based on harmonic linearization. The input to the nonlinearity is assumed to be sinusoidal and, depending on the filtering properties of the linear part of the overall system, the output is adequately represented by the fundamental-frequency term of its Fourier series.
• This approximation is not restricted to small-signal dynamics.
Common Physical Nonlinearities
Nonlinearities can be classified as incidental and
intentional.
• Incidental nonlinearities are inherently present in the system. Examples: saturation, dead-zone, Coulomb friction, backlash, etc.
• Intentional nonlinearities are deliberately inserted in the system to modify the system characteristics. Example: relay.
a) Saturation:
Many practical systems, when driven by sufficiently
large signals, exhibit this phenomenon due to
limitations of physical capabilities of their
components.
b) Friction:
Retarding frictional forces arise whenever mechanical members come into sliding contact. The predominant frictional force, called viscous friction, is proportional to the relative velocity of the sliding surfaces.
Viscous friction force  f x
( f is a constant, x is the relative velocity)
There exist two nonlinear frictions, i.e., coulomb
friction which is a constant retarding force (always
opposing the relative motion) and stiction which is the
force required to initiate the motion.
 The force of stiction is always more than that of
coulomb friction since due to interlocking of surface
irregularities, more force is required to move an
object from rest than to maintain it in motion.

 In practice, the stiction force gradually decreases


with velocity and changes over to coulomb friction
at reasonably low velocities.
c) Backlash:
• Backlash is a hysteresis nonlinearity in mechanical transmissions (gear trains and linkages). It occurs in the play between the teeth of the drive gear and the driven gear.
• For a given input, the output is multivalued. Which particular output results for a given input depends on the history of the input.
• This nonlinearity has inherent memory; it is therefore also referred to as a memory-type nonlinearity.
• The width of the input-output curve equals the total backlash.
• In a servo system, gear backlash may cause sustained oscillations or chattering. Backlash can be reduced by using high-quality gears.
d) Dead-zone:
e) Relay:
• A relay is a nonlinear power amplifier which can provide large power amplification at low cost and is sometimes intentionally introduced in control systems.
• A relay-controlled system can be switched abruptly between several discrete states, usually off, full forward, and full reverse.
• In practice, a relay has a definite dead-zone, caused by the fact that the relay coil requires a certain amount of current to actuate the relay.
• Also, since a larger coil current is needed to close the relay than the current at which the relay drops out, the relay characteristic also exhibits some hysteresis.
Phase-Plane Analysis Method
Basic Concepts:
Consider an unforced linear mass-spring-damper system:
$M\frac{d^2x}{dt^2} + f\frac{dx}{dt} + kx = 0$
Let this system be activated by initial conditions only. The dynamics can be written in the form
$\frac{d^2x}{dt^2} + 2\zeta\omega_n\frac{dx}{dt} + \omega_n^2 x = 0$
States of the system: $x_1 = x$ and $x_2 = \dfrac{dx}{dt}$
The state variables (defined in this manner) are called phase variables.
Thus:
$\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = -\omega_n^2 x_1 - 2\zeta\omega_n x_2$
[Figures: responses and phase-trajectories for various damping values, with $x_1(0) = x_1^0$ and $x_2(0) = 0$]
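The trajectories in the figures can be regenerated numerically. The following is a small sketch, not from the notes, with $\omega_n = 1$ and the damping ratios chosen as illustrative assumptions.

```python
# Trace phase-plane trajectories x1 = x, x2 = dx/dt for
#   dx1/dt = x2,  dx2/dt = -wn^2 x1 - 2 z wn x2
# starting from x1(0) = 1, x2(0) = 0, for a few damping ratios z.
import numpy as np
from scipy.integrate import solve_ivp

wn = 1.0  # natural frequency (assumed value)

def phase_rhs(t, x, z):
    x1, x2 = x
    return [x2, -wn**2 * x1 - 2 * z * wn * x2]

for z in (0.0, 0.3, 1.0, 2.0):   # undamped, underdamped, critical, overdamped
    sol = solve_ivp(phase_rhs, (0, 20), [1.0, 0.0], args=(z,), max_step=0.02)
    x1, x2 = sol.y
    print(f"zeta = {z}: trajectory starts at ({x1[0]:.2f}, {x2[0]:.2f}) "
          f"and ends near ({x1[-1]:.2f}, {x2[-1]:.2f})")
    # For zeta > 0 the end point approaches the singular point (0, 0);
    # for zeta = 0 the trajectory is a closed curve around the origin.
```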
• When the differential equations describing the dynamics are nonlinear, it is in general not possible to obtain a closed-form solution for $x_1$ and $x_2$ (example: mass-nonlinear spring-damper system).
• In such situations, the phase-plane method is helpful.
• The coordinate plane with axes corresponding to the dependent variable $x_1 = x$ and its first derivative $x_2 = \dot{x}$ is called the phase-plane.
The curve described by the state point $(x_1, x_2)$ in the phase-plane, with time as the running parameter, is called a phase-trajectory.
Example: Typical Trajectories of a damped linear
system for different initial conditions are shown below.
Such a family of trajectories is called a phase-portrait.

• For an LTI system, the entire phase-plane is covered with trajectories, with only one curve passing through each point of the phase-plane, except for certain points through which either an infinite number of trajectories or no trajectory passes. Such points are called singular points.
• If the parameters of the system vary with time, or if a time-varying driving function is imposed, two or more trajectories may pass through a single point in the phase-plane. In such cases, the phase-portrait becomes involved and more difficult to interpret.
• Thus, the use of the phase-plane method is restricted to 2nd-order systems with constant parameters and constant or zero input.
Singular Points
A general time-invariant system's state equation:
$\dot{x} = f(x, u)$
If the input vector $u$ is constant, it is possible to write this equation in the form
$\dot{x} = F(x)$
• A system represented in this form is called an autonomous system.
For such a system, consider the points in the phase-plane at which the derivatives of all state variables are zero. Such points are called singular points. These are the equilibrium points: if the system is placed at such a point, it will continue to stay there if left undisturbed.
For studying the system dynamics around an equilibrium (singular) point under small perturbations, the system is linearized at that point. The linearized model is:
$\dot{x} = Ax$
For this linear autonomous system, the equilibrium states $x_e$ satisfy $Ax_e = 0$.
If $|A| \neq 0$, then $x_e = 0$ is the only solution.
• In general, if $x_e$ is a singular point, it is convenient to shift the origin of the coordinates to $x_e$. Thus, new phase variables are defined as
$\bar{x} = x - x_e$
Also, $\dot{\bar{x}} = \bar{F}(\bar{x})$, with the equilibrium point lying at $\bar{x} = 0$.
Example: A linear autonomous 2nd-order system.
After the modal transformation $x = Mz$ ($M$ = modal matrix), and assuming distinct eigenvalues $\lambda_1$ and $\lambda_2$, the canonical form is
$\begin{bmatrix}\dot{z}_1\\ \dot{z}_2\end{bmatrix} = \begin{bmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}\begin{bmatrix}z_1\\ z_2\end{bmatrix}$, i.e., $\dot{z}_1 = \lambda_1 z_1$ and $\dot{z}_2 = \lambda_2 z_2$.
N.B.: The transformation changes the coordinate system from $(x_1, x_2)$ to $(z_1, z_2)$ with the same origin.
• The trajectory traced out by the representative point P in the $(z_1, z_2)$ plane has a resultant velocity vector with
$\tan\theta = \frac{dz_2/dt}{dz_1/dt} = \frac{\lambda_2 z_2}{\lambda_1 z_1}$
Integrating $\dfrac{dz_2}{dz_1} = \dfrac{\lambda_2 z_2}{\lambda_1 z_1}$ gives $z_2 = c\,(z_1)^{\lambda_2/\lambda_1}$, where $c$ is a constant of integration.
A plot of this equation in the $(z_1, z_2)$ plane gives the phase-trajectory.
Nodal Point:
Example: A linear 2nd-order system whose eigenvalues are real, distinct and negative.
It is obvious (from the previous analysis) that the singular point is located at the origin.
Phase-trajectory: $z_2 = c\,z_1^{k_1}$, with $k_1 = \dfrac{\lambda_2}{\lambda_1} > 0$.
[Figures: phase-portraits of a stable node and an unstable node for various initial conditions]
• A singular point of this type is called a stable node. The trajectory of the state point travels towards the origin for any initial state.
• When the roots are real and distinct but lie in the right half of the s-plane, the corresponding phase-portrait is that of an unstable node.
Saddle Point:
The eigenvalues for this case are real with opposite signs, and the corresponding phase-trajectories are shown.
$z_2 = c\,(z_1)^{k_2}$ with $k_2 = \dfrac{\lambda_2}{\lambda_1} < 0$, i.e., $z_2 = \dfrac{c}{(z_1)^{|k_2|}}$
• For various initial conditions, the phase-trajectories are hyperbolas with the directions shown, since
$z_1 = z_1(0)e^{\lambda_1 t}$ and $z_2 = z_2(0)e^{\lambda_2 t}$.
[Figure: saddle point]
Focus Point:
System with complex conjugate eigenvalues: $\lambda_{1,2} = \sigma \pm j\omega$.
$\begin{bmatrix}\dot{z}_1\\ \dot{z}_2\end{bmatrix} = \begin{bmatrix}\sigma + j\omega & 0\\ 0 & \sigma - j\omega\end{bmatrix}\begin{bmatrix}z_1\\ z_2\end{bmatrix}$
Transformation:
$\begin{bmatrix}z_1\\ z_2\end{bmatrix} = \begin{bmatrix}\tfrac{1}{2} & \tfrac{j}{2}\\[2pt] \tfrac{1}{2} & -\tfrac{j}{2}\end{bmatrix}\begin{bmatrix}y_1\\ y_2\end{bmatrix}$
which gives the real-variable form
$\begin{bmatrix}\dot{y}_1\\ \dot{y}_2\end{bmatrix} = \begin{bmatrix}\sigma & -\omega\\ \omega & \sigma\end{bmatrix}\begin{bmatrix}y_1\\ y_2\end{bmatrix}$,
i.e., $\dot{y}_1 = \sigma y_1 - \omega y_2$ and $\dot{y}_2 = \omega y_1 + \sigma y_2$.
$\frac{dy_2}{dy_1} = \frac{y_2 + k y_1}{y_1 - k y_2}\,; \qquad k = \frac{\omega}{\sigma}$
Define $r^2 = y_1^2 + y_2^2$ and $\theta = \tan^{-1}\dfrac{y_2}{y_1}$.
Then $r(t) = c\,e^{\sigma t}$ and $\theta(t) = \omega t + \theta_0$, where $c$ is a constant and $\theta(0) = \theta_0$.
This is the equation of a spiral. Two plots of this equation are shown: for negative values of $\sigma$ (stable focus) and for positive values of $\sigma$ (unstable focus).
[Figures: phase-portraits of a stable focus and an unstable focus in the $(y_1, y_2)$ plane]
Centre or Vortex Point:
System eigenvalues: $\lambda_{1,2} = \pm j\omega$ (i.e., $\sigma = 0$).
From the previous analysis, $\dfrac{dy_2}{dy_1} = -\dfrac{y_1}{y_2}$, so $y_1\,dy_1 + y_2\,dy_2 = 0$.
Solution: $y_1^2 + y_2^2 = c^2$, a circle, where $c$ depends on the initial conditions. This type of singular point is known as a centre or vortex point.
N.B.: When one or both eigenvalues are zero, there are infinitely many singular points.
Example: say $\lambda_1 = 0$. Then $\dot{z}_1 = 0$ and $\dot{z}_2 = \lambda_2 z_2$. Thus every point $(z_1, z_2) = (c, 0)$ is an equilibrium state, where $c$ is an arbitrary constant.
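The classification above lends itself to a small programmatic check. The helper below is an illustrative sketch (not part of the notes) that labels the singular point of a linearized 2nd-order system from the eigenvalues of $A$.

```python
# Classify the singular point of x_dot = A x (2nd-order) from eig(A),
# following the node / saddle / focus / centre cases listed above.
import numpy as np

def classify_singular_point(A, tol=1e-9):
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) > tol):                 # complex conjugate pair
        if np.all(np.abs(re) < tol):
            return "centre (vortex)"
        return "stable focus" if re[0] < 0 else "unstable focus"
    if np.any(np.abs(lam) < tol):
        return "degenerate (zero eigenvalue: a line of equilibrium points)"
    if re[0] * re[1] < 0:
        return "saddle point"
    return "stable node" if np.all(re < 0) else "unstable node"

# Quick checks against the cases discussed above:
print(classify_singular_point([[-2, 0], [0, -1]]))   # stable node
print(classify_singular_point([[ 1, 0], [0, -3]]))   # saddle point
print(classify_singular_point([[-1, 2], [-2, -1]]))  # stable focus
print(classify_singular_point([[ 0, 2], [-2,  0]]))  # centre (vortex)
```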
Example: Nonlinear PD controller for 2nd order system
• With a derivative controller, the system damping increases and the settling time reduces (with $K_v$ maintained).
• However, when a low rise time is also required, the design specifications become contradictory.
• For such a scenario, a nonlinearity is intentionally introduced to help meet the contradictory requirements of low rise time and low settling time.
• This can be done by making the derivative term dependent on the error magnitude. By suitably designing the nonlinear function $f(e)$, the derivative term $K_D f(e)\dot{e}$ can be kept small under large-error conditions, giving a smaller rise time; it can also be made larger for small errors to reduce the settling time.
System Eqs: e  K f  e  e  K   e e er c
 
D  v
If r = constant, e   c and e   c
So, e 1  Kf  e   e K ve  0 where K=K D K v

State variables: x1  e x2  e
Kv 1  Kf ( x1 )
So, x1  x2 x2   x1  x2
 
Let x1  x2  0
The only equilibrium point is at: x1  0 and x2  0
By linearization:
 0 1 
A   Kv 1  Kf (0) 
x  Ax   
   
Eigenvalues of A are:
1  Kf (0)  1 1  Kf (0)  Kv
2

1 , 2        
 2  4   
N.B: A desirable design feature can be that the
system response rises and settles fast without
oscillations. For this, the equilibrium point must be a
stable node.

* For f  0   0 , stable node  1, 2  0


 2 K v  1
is obtained by the condition: f  0    
 K 
The stable node requirement fixes the point f  0  on
the f  e  function.
 In case, the above inequality is reversed, i.e.,
 2 K v  1
f  0   
 K 
1 , 2 are complex conjugates with negative real
part. Thus, the equilibrium point will exhibit stable
focus behavior.
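A quick numeric check of the stable-node condition derived above can be made by evaluating the eigenvalue formula directly. The sketch below assumes values of $\tau$, $K_v$, $K_D$ and of the two trial values of $f(0)$; these are illustrative, not from the notes.

```python
# Check: f(0) above/below (2 sqrt(tau Kv) - 1)/K gives a stable node / focus.
import numpy as np

tau, Kv, KD = 0.5, 4.0, 1.0
K = KD * Kv
f0_threshold = (2 * np.sqrt(tau * Kv) - 1) / K   # f(0) >= this  =>  stable node

def eigenvalues(f0):
    a = (1 + K * f0) / (2 * tau)
    disc = a**2 - Kv / tau
    return np.array([-a + np.emath.sqrt(disc), -a - np.emath.sqrt(disc)])

print(f"stable-node threshold on f(0): {f0_threshold:.3f}")
for f0 in (1.5 * f0_threshold, 0.5 * f0_threshold):
    lam = eigenvalues(f0)
    kind = "stable node" if np.all(np.isreal(lam)) else "stable focus"
    print(f"f(0) = {f0:.3f} -> eigenvalues {np.round(lam, 3)} -> {kind}")
```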
Stability of Nonlinear Systems
Stability for LTI Systems:
• For free systems: a system is stable if, with zero input and arbitrary initial conditions, the resulting trajectory tends towards the equilibrium state.
• For forced systems: a system is stable if, with a bounded input, the system output is bounded.
• These two notions are essentially equivalent for linear (LTI) systems. But for nonlinear systems, there is no definite correspondence between the two notions.
Stability Study of Nonlinear Free Systems:
• Linear autonomous systems (with nonzero eigenvalues) have only one equilibrium state, and their behavior about the equilibrium state completely determines the qualitative behavior in the state plane.
• In nonlinear systems, the behavior for small deviations about the equilibrium point may be different from that for large deviations.
• Thus, local stability may not imply stability in the entire state plane.
• Moreover, for a nonlinear system with multiple equilibrium states, the system trajectory may move from one equilibrium point to another as time advances.
There are broadly three stability definitions for nonlinear systems:
a) Stability
b) Asymptotic stability
c) Asymptotic stability in-the-large

Consider the autonomous system $\dot{x} = F(x)$.
Assumption: the system has only one equilibrium point. Normally this is the case for a well-designed system. Without loss of generality, let the state-space origin be taken as the equilibrium point.
• The system is stable at the origin if, for every initial state $x(t_0)$ sufficiently close to the origin, $x(t)$ remains near the origin for all $t$.
• It is asymptotically stable if $x(t)$ approaches the origin as $t \to \infty$.
• It is asymptotically stable in-the-large if it is asymptotically stable for every initial state, regardless of how near or far it is from the origin.
More Precise Mathematical Definitions:
• The system is stable at the origin if, for every real number $\varepsilon > 0$, there exists a real number $\delta(\varepsilon) > 0$ such that $\lVert x(t_0)\rVert \le \delta$ results in $\lVert x(t)\rVert \le \varepsilon$ for all $t \ge t_0$.
Here $\lVert x\rVert = \left(x_1^2 + x_2^2 + \cdots + x_n^2\right)^{1/2}$ is the Euclidean norm; $\lVert x\rVert \le R$ is a hyper-spherical region $S(R)$ of radius $R$ surrounding the equilibrium point $x = 0$.
• This stability definition implies that, for any given $S(\varepsilon)$, the designer should be able to find an $S(\delta)$ such that a state starting in $S(\delta)$ will never leave $S(\varepsilon)$.
• A system is said to be locally stable (stable in-the-small) if the region $S(\delta)$ is small.
• The system is asymptotically stable at the origin if
a) it is stable, and
b) there exists a real number $r > 0$ such that $\lVert x(t_0)\rVert \le r$ results in $x(t) \to 0$ as $t \to \infty$.
Thus, every motion starting in $S(r)$ converges to the origin as $t \to \infty$.
• The system is asymptotically stable in-the-large (globally asymptotically stable) at the origin if
a) it is stable, and
b) every initial state $x(t_0)$ results in $x(t) \to 0$ as $t \to \infty$.
Hence, this stability guarantees that every motion will approach the origin.
Limit Cycle
• These stability definitions are framed in terms of a disturbed steady state coming back to its equilibrium position, or at least staying within tolerable limits of it.
• This includes the possibility that a disturbed nonlinear system, even while staying within tolerable limits, may exhibit the special behavior of following a closed trajectory called a limit cycle.
• Limit cycles describe oscillations in nonlinear systems.
Example: Van der Pol's Differential Equation
$\frac{d^2x}{dt^2} - \mu(1 - x^2)\frac{dx}{dt} + x = 0$
• Comparison with the linear system:
$\frac{d^2x}{dt^2} + 2\zeta\omega_n\frac{dx}{dt} + \omega_n^2 x = 0$
• Thus Van der Pol's system has an amplitude-dependent damping factor $-\mu(1 - x^2)$.
If $|x| \gg 1$ initially, the damping factor is large and positive. The system thus behaves like an overdamped system ($|x|$ decreases). In this process the damping factor also decreases, and the state finally enters a limit cycle, as shown (outer trajectory).
• Similarly, if $|x| \ll 1$ initially, the damping is negative. Hence $|x|$ increases until the state again enters the limit cycle (inner trajectory).
• This limit cycle is stable, since the paths in its neighbourhood converge toward the limit cycle.
• If the paths in the neighbourhood of a limit cycle diverge away from it, the limit cycle is unstable.
Example (unstable limit cycle):
$\frac{d^2x}{dt^2} + \mu(1 - x^2)\frac{dx}{dt} + x = 0$   (sign of the damping term is reversed)
N.B.: In general, a limit cycle is an undesirable feature in a control system. It is tolerable only if its amplitude is within some defined limits.
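The convergence onto the Van der Pol limit cycle from both inside and outside can be confirmed with a brief simulation. The sketch below assumes $\mu = 1$ and the two initial conditions shown; it is illustrative only.

```python
# Trajectories of the Van der Pol oscillator started inside and outside the
# limit cycle both settle onto the same closed orbit (amplitude ~2).
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def van_der_pol(t, y):
    x, v = y
    return [v, mu * (1 - x**2) * v - x]

for label, y0 in (("inner start", [0.1, 0.0]), ("outer start", [4.0, 0.0])):
    sol = solve_ivp(van_der_pol, (0, 60), y0, max_step=0.01)
    tail = sol.y[0][sol.t > 40]          # last few cycles ~ the limit cycle
    print(f"{label}: steady amplitude ~ {np.max(np.abs(tail)):.2f}")
```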
In linear autonomous systems, when oscillations
occur, the resulting trajectories will be closed curves.
The amplitude of oscillations (not fixed) changes with
the size of initial conditions. Slight changes in system
parameters (shifting the eigenvalues from the
imaginary axis) will destroy the oscillations.

In nonlinear systems, there can be oscillations that are generally independent of the size of the initial conditions, and these oscillations (limit cycles) are usually much less sensitive to system parameter variations.
Limit cycles of fixed amplitude and period can be
sustained over a finite and sufficiently large range of
system parameters.
[Figures: (a) stable limit cycle, (b) unstable limit cycle, (c) semi-stable limit cycle]
Describing Function Method
The phase-plane method is generally restricted to 2nd-order systems. The describing function method can be used for the analysis of a wider class of systems.
Basic Concepts:
• Assumption: the input $x$ to the nonlinear element is sinusoidal,
$x = X\sin\omega t$
The output of the nonlinear element is then a nonsinusoidal periodic function:
$y = A_0 + A_1\sin\omega t + B_1\cos\omega t + A_2\sin 2\omega t + B_2\cos 2\omega t + \cdots$
• Further, if the nonlinearity is assumed to be symmetrical, the average value of $y$ is zero, so
$y = A_1\sin\omega t + B_1\cos\omega t + A_2\sin 2\omega t + B_2\cos 2\omega t + \cdots$

In the absence of external input (r  0 ), the output of N


is fed back to its input through G 2 (s) and G1 (s) in tandem.

• If G 2 (s) and G1 (s) have low-pass characteristics (usual


scenario), it may be assumed that all the harmonics
of y are filtered out in the process such that x  t  is
mainly contributed by the fundamental components
of y , i.e., remains sinusoidal under such conditions.
Thus the harmonic content of y can be thrown away
for the purpose of analysis, i.e.,
y  A1 sin t  B1 cos t  Y1 sin(t  1 )
This procedure heuristically linearizes the
nonlinearity. This type of linearization is valid for
large signals as long as the harmonic filtering
condition is satisfied.
• Under these assumptions, the nonlinearity can be
replaced by a describing function K N (X, ) defined as:
Y1
K N (X,  )  1
X
when the input to the nonlinearity is: x  Xsin t
 Thus, the describing function is in general dependent
upon the amplitude and frequency of the input. Now,
all the frequency domain techniques of linear system
theory can be applied.
N.B: Because of the assumption of low-pass behavior
of the linear elements in the system, the describing
function method is mainly used for stability analysis.

Also, though it is a frequency domain approach, no


general correlation is possible between time and
frequency responses.
Derivation of Describing Functions:
$x = X\sin\omega t, \qquad y_1 = A_1\sin\omega t + B_1\cos\omega t = Y_1\sin(\omega t + \phi_1)$
$A_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\sin\omega t\; d(\omega t)$
$B_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\cos\omega t\; d(\omega t)$
$Y_1 = \sqrt{A_1^2 + B_1^2}, \qquad \phi_1 = \tan^{-1}\!\left(\frac{B_1}{A_1}\right)$
So,
$K_N(X, \omega) = \frac{Y_1}{X}\angle\phi_1$
Example: Dead-zone and saturation nonlinearity
Sinusoidal response (for $X \ge S/2$):
$y = \begin{cases} 0, & 0 \le \omega t \le \alpha \\ K\left(X\sin\omega t - \dfrac{D}{2}\right), & \alpha \le \omega t \le \beta \\ K\left(\dfrac{S - D}{2}\right), & \beta \le \omega t \le (\pi - \beta) \\ K\left(X\sin\omega t - \dfrac{D}{2}\right), & (\pi - \beta) \le \omega t \le (\pi - \alpha) \\ 0, & (\pi - \alpha) \le \omega t \le \pi \end{cases}$
where $\alpha = \sin^{-1}\dfrac{D}{2X}$ and $\beta = \sin^{-1}\dfrac{S}{2X}$.
The output has half-wave and quarter-wave symmetries. So,
$B_1 = 0 \qquad\text{and}\qquad A_1 = \frac{4}{\pi}\int_0^{\pi/2} y\,\sin\omega t\; d(\omega t)$
$A_1 = \frac{4}{\pi}\left[\int_\alpha^\beta K\left(X\sin\omega t - \frac{D}{2}\right)\sin\omega t\; d(\omega t) + \int_\beta^{\pi/2} K\left(\frac{S - D}{2}\right)\sin\omega t\; d(\omega t)\right] = \frac{KX}{\pi}\left[2(\beta - \alpha) + \sin 2\beta - \sin 2\alpha\right]$
Thus the describing function is
$\frac{K_N(X)}{K} = \begin{cases} 0, & X \le \dfrac{D}{2} \;\; (\alpha = \beta = \pi/2) \\[4pt] 1 - \dfrac{2}{\pi}\left(\alpha + \sin\alpha\cos\alpha\right), & \dfrac{D}{2} \le X \le \dfrac{S}{2} \;\; (\beta = \pi/2) \\[4pt] \dfrac{1}{\pi}\left[2(\beta - \alpha) + \sin 2\beta - \sin 2\alpha\right], & X \ge \dfrac{S}{2} \end{cases}$
Special Case #1: Saturation nonlinearity ($D = 0$, $\alpha = 0$):
$\frac{K_N(X)}{K} = \begin{cases} 1, & X \le \dfrac{S}{2} \\[4pt] \dfrac{2}{\pi}\left(\beta + \sin\beta\cos\beta\right) = \dfrac{2}{\pi}\left[\sin^{-1}\dfrac{S}{2X} + \dfrac{S}{2X}\sqrt{1 - \left(\dfrac{S}{2X}\right)^2}\,\right], & X > \dfrac{S}{2} \end{cases}$
[Figure: describing function of the saturation nonlinearity]

Special Case #2: Dead-zone nonlinearity ($S \to \infty$, $\beta = \pi/2$):
$\frac{K_N(X)}{K} = \begin{cases} 0, & X \le \dfrac{D}{2} \\[4pt] 1 - \dfrac{2}{\pi}\left(\alpha + \sin\alpha\cos\alpha\right) = 1 - \dfrac{2}{\pi}\left[\sin^{-1}\dfrac{D}{2X} + \dfrac{D}{2X}\sqrt{1 - \left(\dfrac{D}{2X}\right)^2}\,\right], & X > \dfrac{D}{2} \end{cases}$
[Figure: describing function of the dead-zone nonlinearity]
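The two special-case formulas can be cross-checked numerically by computing the fundamental Fourier coefficient of the output directly. The following sketch assumes the values of $K$, $S$ and $D$ shown; it is an illustration, not part of the notes.

```python
# Numerical check of the saturation and dead-zone describing functions.
import numpy as np

def describing_function(nonlin, X, n=20000):
    wt = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    y = nonlin(X * np.sin(wt))
    A1 = (2.0 / n) * np.sum(y * np.sin(wt))   # numerical Fourier coefficient
    return A1 / X                              # B1 = 0 for these nonlinearities

K, S, D = 2.0, 1.0, 0.4                        # assumed values

saturation = lambda x: K * np.clip(x, -S / 2, S / 2)
dead_zone  = lambda x: K * np.sign(x) * np.maximum(np.abs(x) - D / 2, 0.0)

def kn_saturation(X):
    if X <= S / 2:
        return K
    b = np.arcsin(S / (2 * X))
    return K * (2 / np.pi) * (b + np.sin(b) * np.cos(b))

def kn_dead_zone(X):
    if X <= D / 2:
        return 0.0
    a = np.arcsin(D / (2 * X))
    return K * (1 - (2 / np.pi) * (a + np.sin(a) * np.cos(a)))

for X in (0.3, 0.6, 1.5, 5.0):
    print(f"X = {X}: saturation {describing_function(saturation, X):.4f} "
          f"(formula {kn_saturation(X):.4f}), dead-zone "
          f"{describing_function(dead_zone, X):.4f} (formula {kn_dead_zone(X):.4f})")
```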

• The describing functions of the saturation and dead-zone nonlinearities are frequency-invariant and have zero phase shift.
• All nonlinearities whose input-output characteristics can be represented by a planar graph have describing functions that are independent of frequency but amplitude dependent.
• An element described by a nonlinear differential equation has a describing function that is both frequency and amplitude dependent.
• A frequency-invariant describing function with zero phase shift is produced by a memoryless nonlinearity, whose output is independent of the history of the input (examples: saturation and dead-zone).
Example: Relay with dead-zone and hysteresis
$y = \begin{cases} 0, & 0 \le \omega t < \alpha \\ M, & \alpha \le \omega t < (\pi - \beta) \\ 0, & (\pi - \beta) \le \omega t < (\pi + \alpha) \\ -M, & (\pi + \alpha) \le \omega t < (2\pi - \beta) \\ 0, & (2\pi - \beta) \le \omega t \le 2\pi \end{cases}$
where $\sin\alpha = \dfrac{D}{2X}$ and $\sin\beta = \dfrac{D - 2H}{2X}$.
$B_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\cos\omega t\; d(\omega t) = \frac{2M}{\pi}\left(\sin\beta - \sin\alpha\right) = -\frac{2M}{\pi}\left(\frac{H}{X}\right)$
$A_1 = \frac{1}{\pi}\int_0^{2\pi} y\,\sin\omega t\; d(\omega t) = \frac{2M}{\pi}\left(\cos\alpha + \cos\beta\right) = \frac{2M}{\pi}\left[\sqrt{1 - \left(\frac{D}{2X}\right)^2} + \sqrt{1 - \left(\frac{D - 2H}{2X}\right)^2}\,\right]$
Thus,
$K_N(X) = \begin{cases} 0, & X < \dfrac{D}{2} \\[4pt] \sqrt{\left(\dfrac{A_1}{X}\right)^2 + \left(\dfrac{B_1}{X}\right)^2}\;\angle\tan^{-1}\dfrac{B_1}{A_1}, & X \ge \dfrac{D}{2} \end{cases}$
• This $K_N(X)$ is independent of frequency. But, the relay being a memory-type nonlinearity (i.e., its output depends on the input history), $K_N$ has both a magnitude and an angle (the lagging angle is equivalent to the effect of a pole in a linear system).
Case 1: Ideal Relay: D  H  0
4M
KN ( X ) =
X
Case 2: Relay with dead zone: H  0
 D
 0 ,X 
D  2
KN ( X )   2
M  4D 1   D  , X  D
  X  
 2 X  2
Case 3: Relay with hysteresis: H  D
 H
 0 ,X 
H  2
KN ( X )  
M  4 H  sin 1 H ,X 
H
  X 2X 2

Case 1 Case 2 Case 3
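The complex (magnitude-and-angle) nature of the relay describing function can also be verified numerically. The sketch below assumes the values of $M$, $D$ and $H$ shown and the switching convention used above (on at $|x| = D/2$, off at $|x| = D/2 - H$); it is illustrative only.

```python
# Numerical vs. analytical describing function of the relay with dead-zone
# and hysteresis.
import numpy as np

M, D, H = 1.0, 0.6, 0.2      # relay output, dead-zone width, hysteresis (assumed)

def relay_cycle(X, n=100000):
    """One cycle of the relay output for x = X sin(wt)."""
    wt = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    x = X * np.sin(wt)
    y = np.zeros(n)
    state = 0.0
    for i in range(n):                   # switch on at D/2, off at D/2 - H
        if x[i] >= D / 2:
            state = M
        elif x[i] <= -D / 2:
            state = -M
        elif abs(x[i]) < D / 2 - H:
            state = 0.0
        y[i] = state
    return wt, y

def df_numeric(X):
    if X < D / 2:
        return 0j
    wt, y = relay_cycle(X)
    A1 = (2.0 / len(wt)) * np.sum(y * np.sin(wt))
    B1 = (2.0 / len(wt)) * np.sum(y * np.cos(wt))
    return (A1 + 1j * B1) / X

def df_formula(X):
    if X < D / 2:
        return 0j
    a, b = np.arcsin(D / (2 * X)), np.arcsin((D - 2 * H) / (2 * X))
    A1 = (2 * M / np.pi) * (np.cos(a) + np.cos(b))
    B1 = -2 * M * H / (np.pi * X)
    return (A1 + 1j * B1) / X

for X in (0.25, 0.5, 1.0, 3.0):
    kn, kf = df_numeric(X), df_formula(X)
    print(f"X = {X}: numeric |K_N| = {abs(kn):.4f}, angle = {np.degrees(np.angle(kn)):6.2f} deg;"
          f" formula |K_N| = {abs(kf):.4f}, angle = {np.degrees(np.angle(kf)):6.2f} deg")
```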


N.B.: When two nonlinearities are placed in tandem, the resultant describing function cannot be obtained by multiplying the describing functions of the individual nonlinearities.
• In such cases, one must first combine the nonlinearities into a composite one and then find the describing function of the composite nonlinearity.
Stability Analysis by Describing Function Method:
• The major use of describing functions is in stability investigations and the prediction of limit cycles.
• Characteristic equation: $1 + G_1(j\omega)G_2(j\omega)K_N(X, \omega) = 0$
• The characteristic equation is independent of the physical location of the nonlinearity in the system.
According to the Nyquist stability criterion, the system will exhibit sustained oscillations (limit cycles) when
$G_1(j\omega)G_2(j\omega)K_N(X, \omega) = -1$
• This implies that the polar plot of $G_1(j\omega)G_2(j\omega)K_N(X, \omega)$ passes through the critical point $-1 + j0$.
Thus, the condition can be restated as
$G_1(j\omega)G_2(j\omega) = -\frac{1}{K_N(X, \omega)}$
• This modified condition differs from the original one in that now the frequency response of the linear part $G_1(j\omega)G_2(j\omega)$ is plotted, while the critical point $-1 + j0$ becomes the critical locus $-1/K_N(X, \omega)$. The intersection of the $G_1(j\omega)G_2(j\omega)$ plot with the critical locus determines the amplitude and frequency of the limit cycle.
Example:
• The $G(j\omega)$ plot is superimposed on the $-1/K_N(X)$ locus, where the describing function is frequency-invariant. The system is assumed to be open-loop stable, and its closed-loop stability is investigated.
Values of $X$ for which the $-1/K_N(X)$ locus is enclosed by the $G(j\omega)$ plot, i.e., for which the $-1/K_N(X)$ locus lies in the region to the right of an observer traversing the $G(j\omega)$ plot for positive frequencies in the direction of increasing $\omega$, correspond to unstable conditions.
• Similarly, values of $X$ for which the $-1/K_N(X)$ locus is not enclosed by the $G(j\omega)$ plot, i.e., lies in the region to the left of such an observer, correspond to stable conditions.
The $-1/K_N(X)$ locus and the $G(j\omega)$ plot intersect at $(\omega = \omega_2, X = X_2)$, which corresponds to the condition of a limit cycle (self-sustained oscillations). The system is unstable for $X < X_2$ and stable for $X > X_2$.
The stability of limit cycle can be assessed by
perturbation technique.
• Suppose the system is originally operating at A in the state of a limit cycle. Assume that a slight perturbation is given to the system so that the input to the nonlinear element increases to $X_3$ (the operating point shifts to B). Since B is in the range of stable operation, the amplitude of the input to the nonlinear element progressively decreases, and hence the operating point moves back towards A.
• The opposite happens if a perturbation shifts the operating point to C, which lies in the unstable region: the amplitude then grows, and the operating point again returns to A.

A few cases for the prediction and stability of limit cycles:
• For systems having frequency-dependent describing functions, the stability analysis method is basically the same, but $-1/K_N(X, \omega)$ has infinitely many loci, one for each value of $\omega$. The limit cycles are determined by the intersections of the $-1/K_N(X, \omega)$ loci with the $G(j\omega)$ plot. Only those intersections at which the locus and the plot have a common value of $\omega$ qualify as limit cycles.
• If an intersection point belongs to different values of $\omega$ on the $G(j\omega)$ plot and on the $-1/K_N(X, \omega)$ loci, it does not correspond to a solution of the modified characteristic equation and thus does not determine a limit cycle.
Example: A relay-controlled system
For the ideal relay with unit output ($M = 1$): $\dfrac{1}{K_N(E)} = \dfrac{\pi E}{4}$, where $E$ is the amplitude of the sinusoidal error signal $e$.
• Consider the $G(j\omega)$ plot for $K = K_1$, which intersects the $-1/K_N(E)$ locus at A, resulting in a limit cycle of amplitude $E_1$ and frequency $\omega_1$. As an observer traverses the $G(j\omega)$ plot in the direction of increasing $\omega$, the portion OA of the $-1/K_N(E)$ locus lies to its right and the portion AC lies to its left. Thus, this limit cycle is stable.
• As the gain is increased to $K_2\;(> K_1)$, the intersection point shifts to B, resulting in a limit cycle of amplitude $E_2\;(> E_1)$ and frequency $\omega = \omega_2$.
• This system has a limit cycle for all positive values of gain.
Since, for the ideal relay, the $-1/K_N(E)$ locus is the negative real axis, the frequency of the limit cycle can be determined from $\angle G(j\omega_e) = -180^\circ$, where $\omega_e$ is the limit-cycle frequency.
• Knowing $\omega_e$, the amplitude of the limit cycle can be found from
$\frac{1}{K_N(E)} = \frac{\pi E}{4} = |G(j\omega_e)|$
Stability Analysis by Lyapunov's Method
Lyapunov Stability Criterion
$\dot{x} = f\left(x(t), u(t), t\right)$
An analytical solution of this equation is rarely possible. If a numerical solution is attempted, the stability behavior cannot be fully addressed, since solutions for an infinite set of initial conditions would be required.
Lyapunov's method is a rigorous method for stability investigation.
Basic Stability Theorems:
• Lyapunov's method is based on the concept of energy and the relation of stored energy to system stability.
$\dot{x}(t) = f\left(x(t)\right)$: autonomous system, with $x\left[x(t_0), t\right]$ a solution trajectory.
$V(x)$ = total energy associated with the system.
• If the derivative $dV(x)/dt$ is negative for all $x\left[x(t_0), t\right]$ except the equilibrium point, then it follows that the energy of the system decreases as $t$ increases, and finally the system will reach the equilibrium point.
• This is true because energy is a non-negative function of the system state, which reaches a minimum only if the system motion stops.
Example:
$\ddot{x}_1 + f\dot{x}_1 + Kx_1 = 0$
State model: $\dot{x}_1 = x_2$, $\dot{x}_2 = -Kx_1 - fx_2$
• At any instant, the total energy $V$ in the system consists of the kinetic energy of the moving mass and the potential energy stored in the spring:
$V(x_1, x_2) = \frac{1}{2}x_2^2 + \frac{1}{2}Kx_1^2$
So, $V(x) > 0$ when $x \neq 0$, and $V(0) = 0$.
• This means that the equilibrium state $x_e = 0$ is where the energy is zero.
$\dot{V}(x_1, x_2) = \frac{\partial V}{\partial x_1}\frac{dx_1}{dt} + \frac{\partial V}{\partial x_2}\frac{dx_2}{dt} = Kx_1x_2 + x_2\left(-Kx_1 - fx_2\right) = -fx_2^2$
• So $dV/dt$ is negative at all points except where $x_2 = 0$, at which $dV/dt$ is zero.
• Thus, with positive damping, the energy cannot increase.
• $\dot{x}_2 = -Kx_1$ at the points where $x_2 = 0$.
• So, the system cannot stay in a non-equilibrium state for which $x_2 = 0$. Thus the energy cannot remain constant except at the equilibrium point, where it is zero.
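The energy-decay argument can be checked along a simulated trajectory. The sketch below uses assumed values $f = 0.5$ and $K = 1$; it is illustrative only.

```python
# V = 0.5*x2^2 + 0.5*K*x1^2 should be non-increasing along trajectories of
# the damped mass-spring system (dV/dt = -f*x2^2 <= 0).
import numpy as np
from scipy.integrate import solve_ivp

f, K = 0.5, 1.0
rhs = lambda t, x: [x[1], -K * x[0] - f * x[1]]

sol = solve_ivp(rhs, (0, 30), [1.0, 0.0], max_step=0.01)
V = 0.5 * sol.y[1] ** 2 + 0.5 * K * sol.y[0] ** 2

print(f"V(0) = {V[0]:.4f}, V(30) = {V[-1]:.6f}")
# Should be ~0 (any small positive value is just integration error):
print("largest increase of V between samples:", float(np.max(np.diff(V))))
```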
Visual Analogy of V  x
• The constant V-loci are ellipses on the cup-
shaped surface. Also a trajectory with initial
condition  x10 , x20  is shown. It crosses the constant
V-curves and moves towards the lowest point of
the cup (which is the equilibrium point).

N.B: In this example, it was easy to associate the


energy function V with the system. But, in general,
there is no obvious way of associating an energy
function with a given set of equations describing a
system.
Theorem 1:
Consider the system $\dot{x} = f(x)$, $f(0) = 0$.
Suppose there exists a scalar function $V(x)$ which, for some real number $\varepsilon > 0$, satisfies the following properties for all $x$ in the region $\lVert x\rVert \le \varepsilon$:
(a) $V(x) > 0$ for $x \neq 0$ and $V(0) = 0$, i.e., $V(x)$ is a positive definite scalar function;
(b) $V(x)$ has continuous partial derivatives with respect to all components of $x$;
(c) $\dfrac{dV}{dt} \le 0$ (a negative semi-definite scalar function).
Then the system is stable near the origin.
Theorem 2:
If all the conditions of Theorem 1 hold and property (c) is replaced by
$\dfrac{dV}{dt} < 0$ for $x \neq 0$ (a negative definite scalar function),
then the system is asymptotically stable.
Theorem 3:
If all the conditions of Theorem 2 hold and, in addition,
$V(x) \to \infty$ as $\lVert x\rVert \to \infty$,
then the system is asymptotically stable in-the-large at the origin.
Lyapunov Functions:
• Determination of stability by Lyapunov's method requires the choice of a positive definite function $V(x)$, called the Lyapunov function.
There is no universal method for selecting a Lyapunov function for a specific problem; some Lyapunov functions may provide a better answer than others.
• If a Lyapunov function of the required type cannot be found, this does not imply that the system is unstable. It only means that the attempt at establishing the stability of the system has failed.
• For a given $V(x)$, there is no general method that easily ascertains whether it is positive definite. But if $V(x)$ is a quadratic form, Sylvester's theorem can be used to ascertain the definiteness of the function.
Example: Nonlinear system: x1   x1  2 x12 x2
x2   x2
V may be chosen as: V  x 2  x 2
1 2
dV V V
  1   1 2 
2 2
  x1  x2 2 x 1 2 x x 2 x
dt x1 x2 2

Although it is not possible to make a general statement


regarding global stability in this case, it is clear that dV dt
is clearly negative definite if 1  2 x1 x2  0
• This defines a region of stability in state space,
bounded by all points for which x1 x2  0.5
• In the 2nd and 4th quadrants, the inequality is
satisfied for all points. The origin of the system is
asymptotically stable.
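The sign of $dV/dt$ inside the stated region can be spot-checked on a grid. The bounds of the grid below are arbitrary; the expression is the one derived above.

```python
# dV/dt = -2*x1^2*(1 - 2*x1*x2) - 2*x2^2 should be negative everywhere on the
# region x1*x2 < 0.5 except at the origin.
import numpy as np

x1, x2 = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
dV = -2 * x1**2 * (1 - 2 * x1 * x2) - 2 * x2**2

inside = (x1 * x2 < 0.5) & ~((x1 == 0) & (x2 == 0))
print("max dV/dt inside x1*x2 < 0.5 (excluding the origin):",
      float(dV[inside].max()))          # should be negative
```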
Lyapunov's Method for Linear Systems:
$\dot{x} = Ax$ (autonomous system)
This linear system is asymptotically stable in-the-large at the origin if and only if, given any symmetric positive definite matrix $Q$, there exists a symmetric positive definite matrix $P$ which is the unique solution of
$A^TP + PA = -Q$
Proof (sufficiency):
Assume that a symmetric positive definite matrix $P$ exists which is the unique solution of $A^TP + PA = -Q$.
Consider the scalar function $V(x) = x^TPx$; then $V(x) > 0$ for $x \neq 0$ and $V(0) = 0$.
$\dot{V}(x) = \dot{x}^TPx + x^TP\dot{x} = x^TA^TPx + x^TPAx = x^T\left(A^TP + PA\right)x = -x^TQx$
Since $Q$ is positive definite, $\dot{V}(x)$ is negative definite.
A norm of $x$ may be defined as $\lVert x\rVert = \left(x^TPx\right)^{1/2}$.
Then $V(x) = \lVert x\rVert^2$ and $V(x) \to \infty$ as $\lVert x\rVert \to \infty$.
• The system is therefore asymptotically stable in-the-large at the origin.
• In order to show that this is necessary, suppose that the system is asymptotically stable while $P$ is negative definite.
Consider the scalar function $V(x) = -x^TPx$, which is then positive definite, and
$\dot{V}(x) = -\left(\dot{x}^TPx + x^TP\dot{x}\right) = x^TQx > 0$
There is a contradiction, since this $V(x)$ indicates instability.
Hence the conditions for positive definiteness of $P$ are necessary and sufficient for asymptotic stability of the linear autonomous system.
Example:
$\dot{x} = Ax, \qquad A = \begin{bmatrix} -1 & 2 \\ -1 & -4 \end{bmatrix}$
Let $Q = I$. Then $A^TP + PA = -I$:
$\begin{bmatrix} -1 & -1 \\ 2 & -4 \end{bmatrix}\begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}\begin{bmatrix} -1 & 2 \\ -1 & -4 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$
This gives
$2p_{11} + 2p_{12} = 1$
$2p_{11} - 5p_{12} - p_{22} = 0$
$4p_{12} - 8p_{22} = -1$
Solving these equations,
$P = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} = \begin{bmatrix} \dfrac{23}{60} & \dfrac{7}{60} \\[6pt] \dfrac{7}{60} & \dfrac{11}{60} \end{bmatrix}$
Using Sylvester's criterion, it can be ascertained that $P$ is positive definite.
• Thus, the origin of this system is asymptotically stable in-the-large.
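The same $P$ can be obtained with SciPy's continuous Lyapunov solver, which gives a quick cross-check of the hand calculation above.

```python
# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing
# a = A^T and q = -Q solves A^T P + P A = -Q.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  2.0],
              [-1.0, -4.0]])
Q = np.eye(2)

P = solve_continuous_lyapunov(A.T, -Q)
print("P =\n", P)                                 # expected [[23/60, 7/60], [7/60, 11/60]]
print("residual A^T P + P A + Q:\n", A.T @ P + P @ A + Q)

# Sylvester's criterion: all leading principal minors of P must be positive.
minors = [P[0, 0], np.linalg.det(P)]
print("leading principal minors:", minors,
      "-> P positive definite:", all(m > 0 for m in minors))
```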
Construction of a Lyapunov Function for Nonlinear Systems:
Krasovskii's Method:
System: $\dot{x} = f(x)$, $f(0) = 0$.
Lyapunov function: $V = f^TPf$, where $P$ is a symmetric positive definite matrix.
Now,
$\dot{V} = \dot{f}^TPf + f^TP\dot{f}$
$\dot{f} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial t} = Jf$
where
$J = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\[4pt] \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\ \vdots & & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}$ is the Jacobian matrix.
Hence
$\dot{V} = f^TJ^TPf + f^TPJf = f^T\left(J^TP + PJ\right)f$
Let $Q = J^TP + PJ$.
• Since $V$ is positive definite, for the system to be asymptotically stable $Q$ should be negative definite.
If, in addition, $V(x) \to \infty$ as $\lVert x\rVert \to \infty$, then the system is asymptotically stable in-the-large.
Example:
$u = g(e)$, $r = 0$, $c = -e$, and the linear part gives $Ku = -\left(\ddot{e} + \dot{e}\right)$, so that $\ddot{e} + \dot{e} + Kg(e) = 0$.
Let $x_1 = e$ and $x_2 = \dot{x}_1$. So,
$\dot{x}_1 = x_2$
$\dot{x}_2 = -x_2 - Kg(x_1)$
The equilibrium point lies at the origin if $g(0) = 0$.
$J = \begin{bmatrix} 0 & 1 \\ -K\dfrac{dg(x_1)}{dx_1} & -1 \end{bmatrix}$   Let $P = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}$
For $P$ to be positive definite,
$p_{11} > 0 \quad\text{and}\quad p_{11}p_{22} - p_{12}^2 > 0$

Now Q=J P  PJ
T

 dg ( x1 )   0 1
 0 K   p11 p12   p11 p12   
 dx1  
  p12 p22   p12 p22    K
dg ( x )
 1
1
1 1   dx1 
 dg ( x1 ) dg ( x1 ) 
 2 p12 K p11  p12  p22 K
dx1 dx1 
Q 
 dg ( x1 ) 
 p11  p12  p22 K dx 2( p12  p22 ) 
 1 
For the system to be asymptotically stable, Q should
be positive definite.
dg ( x1 )
Thus, 2 p12 K  0 and
dx1
2
dg ( x1 )  dg ( x1 ) 
4 p12 K ( p12  p22 )   p11  p12  p22 K  0
dx1  dx1 
2
dg ( x1 )  dg ( x1 ) 
 4 p12 K ( p22  p12 )   p11  p12  p22 K 
dx1  dx1 
Assumption: $K > 0$, and choose $p_{12} > 0$.
• Then the first condition gives $\dfrac{dg(x_1)}{dx_1} > 0$.
Now choose $p_{11} = p_{12}$ and $p_{22} = \beta p_{12}$, with $\beta > 1$.
• The second condition then gives
$\frac{dg(x_1)}{dx_1} < \frac{4(\beta - 1)}{K\beta^2}$
• These are the two conditions under which the system is asymptotically stable.
For example, if $g(x_1) = x_1^3$ (a nonlinearity lying symmetrically in the 1st and 3rd quadrants),
$\frac{dg(x_1)}{dx_1} = 3x_1^2 \ge 0 \quad\text{(always)}$
According to the second condition,
$x_1^2 < \frac{4}{3K}\left(\frac{1}{\beta} - \frac{1}{\beta^2}\right)$
• The largest range of $x_1$ is obtained when $\beta = 2$. Thus,
$x_1^2 < \frac{1}{3K} \quad\Longrightarrow\quad -\frac{1}{\sqrt{3K}} < x_1 < \frac{1}{\sqrt{3K}}$
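A numerical spot-check of this result is straightforward: with the choices made above, $Q$ should be negative definite for $|x_1| < 1/\sqrt{3K}$ and lose definiteness beyond that bound. The values $K = 1$, $p_{12} = 1$, $\beta = 2$ below are assumptions consistent with the derivation.

```python
# Check that Q = J^T P + P J for g(x1) = x1^3 is negative definite inside
# |x1| < 1/sqrt(3K) and not beyond it.
import numpy as np

K, p12, beta = 1.0, 1.0, 2.0
P = np.array([[p12, p12],
              [p12, beta * p12]])      # p11 = p12, p22 = beta*p12

def Q_matrix(x1):
    dg = 3.0 * x1**2                   # dg/dx1 for g(x1) = x1^3
    J = np.array([[0.0, 1.0],
                  [-K * dg, -1.0]])
    return J.T @ P + P @ J

bound = 1.0 / np.sqrt(3.0 * K)
for x1 in (0.2, 0.5, 0.9 * bound, 1.5 * bound):
    eigs = np.linalg.eigvalsh(Q_matrix(x1))
    print(f"x1 = {x1:.3f} (bound {bound:.3f}): eigenvalues of Q = {np.round(eigs, 3)}"
          f" -> negative definite: {bool(np.all(eigs < 0))}")
```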
