Non-Linear System Analysis

Nonlinear control system design is increasingly important due to the limitations of linear techniques in handling large operational ranges and uncertainties. Various methods such as phase-plane analysis, Lyapunov theory, and describing functions have been developed to analyze nonlinear systems, each with its own advantages and challenges. Understanding these nonlinear behaviors and characteristics is crucial for control engineers to effectively address practical control problems.


Non-Linear System Analysis

Nonlinear control system design has been dominated by linear control techniques, which rely on the key assumption of a small range of operation for the linear model to be valid. This tradition has produced many reliable and effective control systems. However, the demand for nonlinear control methodologies has recently been increasing for several reasons.

First, modern technology, such as applied in high-performance aircraft and high-speed high-accuracy robots, demands control systems with much more stringent design specifications, which are able to handle nonlinearities of the controlled systems more accurately. When the required operation range is large, a linear controller is likely to perform very poorly or to be unstable, because the nonlinearities in the system cannot be properly compensated for. Nonlinear controllers, on the other hand, may directly handle the nonlinearities in large range operation. Also, in control systems there are many nonlinearities whose discontinuous nature does not allow linear approximation.

Second, controlled systems must be able to reject disturbances and uncertainties confronted in real-world applications. In designing linear controllers, it is usually necessary to assume that the parameters of the system model are reasonably well known. However, many control problems involve uncertainties in the model parameters. This may be due to a slow time variation of the parameters (e.g., of ambient air pressure during an aircraft flight), or to an abrupt change in parameters (e.g., in the inertial parameters of a robot when a new object is grasped). A linear controller, based on inaccurate values of the model parameters, may exhibit significant performance degradation or even instability. Nonlinearities can be intentionally introduced into the controller part of a control system, so that model uncertainties can be tolerated.

Third, advances in computer technology have made the implementation of nonlinear controllers a relatively simple task. The challenge for control design is to fully utilize this technology to achieve the best control system performance possible.

Thus, the subject of nonlinear control is an important area of automatic control. Learning basic techniques of nonlinear control analysis and design can significantly enhance the ability of control engineers to deal with practical control problems effectively. It also provides a sharper understanding of the real world, which is inherently nonlinear.

NONLINEAR SYSTEMS ANALYSIS

No universal technique works for the analysis of all nonlinear control systems. For linear control, one can analyze the system in the time domain or in the frequency domain. For nonlinear control systems, however, neither of these standard approaches can be used, since direct solutions of nonlinear differential equations are generally difficult and frequency-domain transformations do not apply.

While the analysis of nonlinear control systems is difficult, serious efforts have been made to develop theoretical tools for it. Many methods of nonlinear control system analysis have been proposed.

Let us briefly describe some of these methods before discussing their details in the following chapters.

Phase-Plane Analysis:

Phase-plane analysis, discussed in Chapter 9, is a method of studying second-order nonlinear systems. Its basic idea is to solve a second-order differential equation and graphically display the result as a family of system motion trajectories on a two-dimensional plane, called the phase plane, which allows us to visually observe the motion patterns of the system. While phase-plane analysis has a number of important advantages, it has the fundamental disadvantage of being applicable only to systems which can be well approximated by second-order dynamics. Because of its graphical nature, it is frequently used to provide intuitive insights about nonlinear effects.
Lyapunov Theory
In using Lyapunov theory to analyze the stability of a nonlinear system, the idea is to construct a scalar energy-like function (a Lyapunov function) for the system, and to see whether it decreases along system trajectories. The power of this method comes from its generality: it is applicable to all kinds of control systems. Conversely, the limitation of the method lies in the fact that it is often difficult to find a Lyapunov function for a given system.

Although Lyapunov's method is originally a method of stability analysis, it can be used for synthesis problems. One important application is the design of nonlinear controllers. The idea is to somehow formulate a scalar positive definite function of the system states, and then choose a control law to make this function decrease. A nonlinear control system thus designed will be guaranteed to be stable. Such a design approach has been used to solve many complex design problems, e.g., in adaptive control and in sliding mode control (discussed in Chapter 10).

Describing Functions
The describing function method, discussed in Chapter 9, is an
approximate technique for studying nonlinear systems. The basic
idea of the method is to approximate the nonlinear components in
nonlinear control systems by linear "equivalents", and then use
frequency-domain techniques to analyze the resulting systems.
Unlike the phase-plane method, it is not restricted to second-order
systems. Rather, the accuracy of describing function analysis
improves with an increase in the order of the system. Unlike the
Lyapunov method, whose applicability to a specific system hinges
on the success of a trial-and-error search for a Lyapunov function,
its application is straightforward for a specific class of nonlinear
systems.

INTRODUCTION:
Because nonlinear systems can have much richer and more complex behaviors than linear systems, their analysis is much more difficult.
Mathematically, this is reflected in two aspects. Firstly, nonlinear
equations, unlike linear ones, cannot, in general, be solved
analytically, and therefore, a complete understanding of the
behavior of a nonlinear system is very difficult. Secondly, powerful
mathematical tools like Laplace and Fourier transforms do not
apply to nonlinear systems. As a result, there are no systematic
tools for predicting the behavior of nonlinear systems. Instead,
there is a rich inventory of powerful analysis tools, each best
applicable to a particular class of nonlinear control problems [125-
129].

SOME COMMON NONLINEAR SYSTEM BEHAVIORS:


As a minimum, it is important to be aware of the main characteristics of nonlinear behavior, if only to permit their recognition when they are encountered experimentally or in system simulations.

The previous chapters have been predominantly concerned with the study of linear time-invariant control systems. We have observed that these systems have quite simple properties, such as the following:

• A linear system ẋ = Ax, with x being the vector of states and A being the system matrix, has a unique equilibrium point (if A is nonsingular, which is normally true for feedback system matrices).

• The equilibrium point is stable if all eigenvalues of A have negative real parts, regardless of initial conditions.

• The transient response is composed of the natural modes of the system, and the general solution can be obtained analytically.

• In the presence of an external input u(t), the system response has a number of interesting properties: (i) it satisfies the principle of superposition; (ii) the asymptotic stability of the system implies bounded-input, bounded-output stability; and (iii) a sinusoidal input leads to a sinusoidal output of the same frequency.
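The superposition property in (i) can be checked numerically. The sketch below is my own illustration, not from the text: it Euler-integrates a first-order linear system ẋ = −x + u and a nonlinear one ẋ = −x − x³ + u; the linear response to u₁ + u₂ equals the sum of the individual responses, while the nonlinear response does not.

```python
# Forward-Euler check of the superposition principle (illustrative sketch).
def simulate(f, u, x0=0.0, dt=0.001, steps=10000):
    x = x0
    for _ in range(steps):
        x += dt * f(x, u)
    return x

f_lin = lambda x, u: -x + u            # linear dynamics
f_nl  = lambda x, u: -x - x**3 + u     # nonlinear (cubic stiffness) dynamics

u1, u2 = 1.0, 1.0
# Linear system: response to u1 + u2 equals the sum of responses.
lin_sum  = simulate(f_lin, u1) + simulate(f_lin, u2)
lin_both = simulate(f_lin, u1 + u2)
# Nonlinear system: superposition fails.
nl_sum   = simulate(f_nl, u1) + simulate(f_nl, u2)
nl_both  = simulate(f_nl, u1 + u2)

print(abs(lin_both - lin_sum))   # essentially zero
print(abs(nl_both - nl_sum))     # clearly nonzero
```

Here the linear steady states add exactly, while the nonlinear steady states (roots of x + x³ = u) do not.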

The behavior of nonlinear systems, however, is much more complex. Due to the lack of the superposition property, nonlinear systems respond to external inputs and initial conditions quite differently from linear systems. Some common nonlinear system properties are as follows [126]:

(i) Nonlinear systems frequently have more than one equilibrium point. For a linear system, stability is seen by noting that, for any initial condition, the motion of a stable system always converges to the equilibrium point. However, a nonlinear system may converge to an equilibrium point starting with one set of initial conditions, and may go to infinity starting with another set of initial conditions. This means that the stability of nonlinear systems may depend on initial conditions. In the presence of a bounded external input, unlike in linear systems, the stability of a nonlinear system may also depend on the input value.
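This dependence on initial conditions can be illustrated with the scalar system ẋ = −x + x³ (a hypothetical example of mine, not from the text): the origin attracts every initial state with |x(0)| < 1, while any |x(0)| > 1 diverges.

```python
# Stability depending on the initial condition (illustrative sketch).
def final_state(x0, dt=0.001, steps=20000, bound=10.0):
    """Euler-integrate x' = -x + x**3; stop early if |x| exceeds `bound`."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + x**3)
        if abs(x) > bound:
            return x  # trajectory has diverged
    return x

print(final_state(0.9))   # decays toward the equilibrium at 0
print(final_state(1.1))   # escapes toward infinity
```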

(ii) Nonlinear systems can display oscillations of fixed amplitude and fixed period without external excitation. These oscillations are called limit cycles.

Consider the well-known Van der Pol differential equation

M ÿ + B(y² − 1) ẏ + K y = 0;  M > 0, B > 0, K > 0

which describes physical situations in many nonlinear systems. It can be regarded as describing a mass-spring-damper system with a position-dependent damping coefficient B(y² − 1). For large values of y, the damping coefficient is positive and the damper removes energy from the system. This implies that the system motion has a convergent tendency. However, for small values of y, the damping coefficient is negative and the damper adds energy into the system. This suggests that the system motion has a divergent tendency. Therefore, because the nonlinear damping varies with y, the system motion can neither grow unboundedly nor decay to zero. Instead, it displays a sustained oscillation independent of initial conditions, as illustrated in Fig. 9.1. Of course, sustained oscillations can also be found in linear systems, e.g., in the case of marginally stable linear systems. However, the oscillation of a marginally stable linear system has its amplitude determined by its initial conditions, and such a system is very sensitive to changes in system parameters (a slight change in parameters is capable of leading either to stable convergence or to instability). In nonlinear systems, on the other hand, the amplitude of sustained oscillations is independent of the initial conditions, and limit cycles are not easily affected by parameter changes.
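This behavior is easy to reproduce numerically. The sketch below (a minimal illustration assuming the common normalization M = B = K = 1) integrates Van der Pol's equation with a hand-coded fourth-order Runge-Kutta scheme from two very different initial conditions; both trajectories settle onto the same limit cycle, whose amplitude is close to 2.

```python
import numpy as np

def vdp_amplitude(y0, yd0, dt=0.01, t_end=60.0):
    """RK4-integrate y'' + (y**2 - 1)*y' + y = 0 (M = B = K = 1) and
    return the peak |y| over the final quarter of the run."""
    def f(s):
        y, yd = s
        return np.array([yd, -(y**2 - 1.0) * yd - y])
    s = np.array([y0, yd0], dtype=float)
    n = int(t_end / dt)
    ys = np.empty(n)
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        ys[i] = s[0]
    return np.max(np.abs(ys[3 * n // 4:]))   # steady-state amplitude

a_small = vdp_amplitude(0.1, 0.0)   # start near the origin
a_large = vdp_amplitude(4.0, 0.0)   # start far outside the cycle
print(a_small, a_large)             # both close to 2
```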

(iii) A nonlinear system with a periodic input may exhibit a periodic output whose frequency is either a subharmonic or a harmonic of the input frequency. For example, an input of frequency 10 Hz may result in an output of 5 Hz for the subharmonic case or 30 Hz for the harmonic case.

(iv) A nonlinear system can display jump resonance, a form of hysteresis, in its frequency response. Consider a mass-spring-damper system

M ÿ + B ẏ + K₁ y + K₂ y³ = F cos ωt;  M > 0, B > 0, K₁ > 0, K₂ > 0

Note that the restoring force of the spring is assumed to be nonlinear. If, in an experiment, the frequency ω is varied while the input amplitude F is held constant, a frequency-response curve of the form shown in Fig. 9.2 may be obtained. As the frequency ω is increased, the response y follows the curve through the points A, B and C. At point C, a small change in frequency results in a discontinuous jump to point D. The response then follows the curve to point E upon further increase in frequency. As the frequency is decreased from point E, the response follows the curve through points D and F. At point F, a small change in frequency results in a discontinuous jump to point B. The response follows the curve to point A for further decrease in frequency. Observe from this description that the response never actually follows the segment CF. This portion of the curve represents a condition of unstable equilibrium.
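The jump can be predicted with a first-harmonic (harmonic-balance) approximation of the kind used later in describing function analysis. Assuming a steady-state response y ≈ X cos(ωt + φ), the amplitude X satisfies [(K₁ − Mω²)X + (3/4)K₂X³]² + (BωX)² = F². Substituting r = X² gives a cubic in r; where this cubic has three positive roots, three response amplitudes coexist, the outer two being the stable branches and the middle one the unstable segment CF. The parameter values below are my own illustrative choices, not from the text.

```python
import numpy as np

# Harmonic-balance amplitude equation for M*y'' + B*y' + K1*y + K2*y**3 = F*cos(w*t):
#   [(K1 - M*w**2)*X + 0.75*K2*X**3]**2 + (B*w*X)**2 = F**2
# With r = X**2 this is a cubic in r; three positive roots => jump resonance.
M, B, K1, K2, F = 1.0, 0.05, 1.0, 1.0, 0.3   # illustrative values
w = 1.4                                       # frequency inside the hysteresis band

d = K1 - M * w**2
coeffs = [(0.75 * K2)**2,            # r**3
          2 * 0.75 * K2 * d,         # r**2
          d**2 + (B * w)**2,         # r**1
          -F**2]                     # r**0
r = np.roots(coeffs)
X_branches = sorted(np.sqrt(x.real) for x in r
                    if abs(x.imag) < 1e-9 and x.real > 1e-12)
print(len(X_branches), X_branches)   # three coexisting response amplitudes
```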
COMMON NONLINEARITIES IN CONTROL SYSTEMS:
In this section, we take a closer look at the nonlinearities found in control systems. Consider the typical block diagram of a closed-loop system shown in Fig. 9.3. It is composed of four parts: a plant to be controlled, sensors for measurement, actuators for control action, and a control law, usually implemented on a computer. Nonlinearities may occur in any part of the system.
We may classify nonlinearities as inherent or intentional. Inherent nonlinearities naturally come with the system's hardware (saturation, deadzone, backlash, Coulomb friction). Usually, such nonlinearities have undesirable effects, and control systems have to compensate for them properly. Intentional nonlinearities, on the other hand, are artificially introduced by the designer. Nonlinear control laws, such as bang-bang optimal control laws and adaptive control laws, are typical examples of intentional nonlinearities.

Saturation:
Saturation is probably the most commonly encountered nonlinearity in control systems. It is often associated with amplifiers and actuators. In transistor amplifiers, the output varies linearly with the input only for small input amplitudes. When the input amplitude gets out of the linear range of the amplifier, the output changes very little and stays close to its maximum value. Most actuators display saturation characteristics. For example, the output torque of a servo motor cannot increase infinitely and tends to saturate, due to the properties of the magnetic material; similarly, valve-controlled hydraulic actuators are saturated by the maximum flow rate.
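Anticipating the describing function analysis later in this chapter, an ideal saturation with linear slope K inside the band |x| ≤ a has the standard closed-form describing function N(X) = K for X ≤ a and N(X) = (2K/π)[sin⁻¹(a/X) + (a/X)√(1 − (a/X)²)] for X > a. A minimal sketch (assuming K = a = 1):

```python
import math

def sat_df(X, K=1.0, a=1.0):
    """Describing function N(X) of an ideal saturation with slope K for
    |x| <= a (standard closed-form result)."""
    if X <= a:
        return K                      # operating entirely in the linear band
    u = a / X
    return (2.0 * K / math.pi) * (math.asin(u) + u * math.sqrt(1.0 - u * u))

print(sat_df(0.5))     # K: small signals see the full linear gain
print(sat_df(2.0))     # gain reduced once the input drives into saturation
print(sat_df(100.0))   # approaches 4*K*a/(pi*X) for large inputs
```

The effective gain thus falls off as the input amplitude grows, which is why saturation tends to limit, rather than destabilize, many feedback loops.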

Deadzone:
A deadzone nonlinearity may occur in sensors, amplifiers and
actuators. In a dc motor, we assume that any voltage applied to
the armature windings will cause the armature to rotate if the field
current is maintained constant. In reality, due to static friction at
the motor shaft, rotation will occur only if the torque provided by
the motor is sufficiently large. This corresponds to a so-called
deadzone for small voltage signals. Similar deadzone phenomena
occur in valve-controlled pneumatic and hydraulic actuators.
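The deadzone has a complementary describing function: zero gain for input amplitudes inside the dead band δ, and N(X) = (2K/π)[π/2 − sin⁻¹(δ/X) − (δ/X)√(1 − (δ/X)²)] for X > δ, approaching the through-gain K as the amplitude grows (a standard result; the sketch below assumes K = δ = 1):

```python
import math

def deadzone_df(X, K=1.0, delta=1.0):
    """Describing function of a deadzone of half-width delta, outer slope K."""
    if X <= delta:
        return 0.0                    # input never leaves the dead band
    u = delta / X
    return (2.0 * K / math.pi) * (math.pi / 2
                                  - math.asin(u) - u * math.sqrt(1.0 - u * u))

print(deadzone_df(0.8))   # 0: no output at all
print(deadzone_df(2.0))   # partial effective gain
print(deadzone_df(50.0))  # close to K for large inputs
```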

Backlash:
A backlash nonlinearity commonly occurs in mechanical components of control systems. In gear trains, small gaps exist between a pair of mating gears. As a result, when the driving gear rotates through an angle smaller than the gap H, the driven gear does not move at all; this corresponds to a deadzone that persists until contact has been established between the two gears.

Coulomb Friction:
In any system where there is relative motion between contacting surfaces, there are several types of friction, all of them nonlinear except the viscous component. Coulomb friction is, in essence, a drag (reaction) force which opposes motion but is essentially constant in magnitude, regardless of velocity. The common example is an electric motor, in which we find Coulomb friction drag due to the rubbing contact between the brushes and the commutator.
On-Off Nonlinearity:
In this book we have primarily covered the following three modes
of control:
(i) proportional control;
(ii) integral control; and
(iii) derivative control.
Another important mode of feedback control is on-off control. This class of controllers has only two fixed states rather than a continuous output. In its wider application, the states of an on-off controller may not, however, be simply on and off, but could represent any two values of a control variable. Oscillatory behavior is a typical response characteristic of a system under two-position control, also called bang-bang control. The oscillatory behavior may be avoided using a three-position control (an on-off controller with a deadzone).
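The oscillatory behavior under two-position control, and its suppression by a dead band, can be seen on a toy integrator plant ẏ = u (my own minimal example, not from the text):

```python
def simulate(deadband=0.0, y0=1.005, M=1.0, dt=0.01, steps=400):
    """Euler-simulate the integrator plant y' = u under on-off feedback
    u = -M*sign(y), optionally with a deadzone of half-width `deadband`.
    Returns the number of zero crossings of y (a chatter count)."""
    y, prev, crossings = y0, y0, 0
    for _ in range(steps):
        if abs(y) <= deadband:
            u = 0.0                    # three-position control: do nothing
        else:
            u = -M if y > 0 else M     # two-position (bang-bang) control
        y += dt * u
        if y * prev < 0:
            crossings += 1
        prev = y
    return crossings

print(simulate(deadband=0.0))   # many crossings: sustained chatter
print(simulate(deadband=0.1))   # 0: the state parks inside the dead band
```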

Describing Function Analysis:


For the so-called separable systems, which comprise a linear part defined by its transfer function and a nonlinear part defined by a time-independent relationship between its input and output variables, the describing function method is the most practically useful for analysis. It is an approximate method, but experience with real systems and computer simulation results shows adequate accuracy in many cases. Basically, the method is an approximate extension of frequency-response methods (including the Nyquist stability criterion) to nonlinear systems.
In terms of mathematical properties, nonlinearities may be categorized as continuous and discontinuous. Because discontinuous nonlinearities cannot be locally approximated by linear functions, they are also called "hard" nonlinearities. Hard nonlinearities (such as saturation, backlash, or Coulomb friction) are commonly found in control systems, in both small-range and large-range operations. Whether a system in small-range operation should be regarded as nonlinear or linear depends on the magnitude of the hard nonlinearities and on the extent of their effects on the system performance.

The continuous or so-called "soft" nonlinearities are present in every control system, though not visible because they are not separable. Throughout the book, we have neglected these nonlinearities in our derivations of transfer function and state variable models. For example, we have assumed a linear restoring force of a spring, a constant damping coefficient independent of the position of the mass, etc. In practice, none of these assumptions is true for large-range operation. Also, there are situations, not covered in this book, wherein the linearity assumption gives too small a range of operation to be useful; linear design methods cannot be applied for such systems.

Describing function analysis is applicable to separable hard nonlinearities. For this category of nonlinear systems, as we shall see later in this chapter, the predictions of describing function analysis are usually a good approximation to actual behavior when the linear part of the system provides a sufficiently strong filtering effect. Filtering characteristics of the linear part of a system improve as the order of the system goes up. The 'low-pass filtering' requirement is never completely satisfied; for this reason, the describing function method is mainly used for stability analysis and is not directly applied to the optimization of system design.

DESCRIBING FUNCTION FUNDAMENTALS:


Of all the analytical methods developed over the years for nonlinear systems, the describing function method is generally agreed upon as being the most practically useful. It is an approximate method, but experience with real systems and computer simulation results shows adequate accuracy in many cases. The method predicts whether limit cycle oscillations will exist or not, and gives numerical estimates of oscillation frequency and amplitude when limit cycles are predicted. Basically, the method is an approximate extension of frequency-response methods (including the Nyquist stability criterion) to nonlinear systems.
To discuss the basic concept underlying the describing function
analysis, let us consider the block diagram of a nonlinear system
shown in Fig. 9.5, where the blocks G₁(s) and G₂(s) represent the
linear elements, while the block N represents the nonlinear
element.
The describing function method provides a "linear approximation" to the nonlinear element based on the assumption that the input to the nonlinear element is a sinusoid of known, constant amplitude. The fundamental harmonic of the element's output is compared with the input sinusoid to determine the steady-state amplitude and phase relation. This relation is the describing function for the nonlinear element. The method can, thus, be viewed as 'harmonic linearization' of a nonlinear element.

Describing Function for the Nonlinear Element :


Let us assume that the input x to the nonlinearity in Fig. 9.5 is sinusoidal, i.e.,

x = X sin ωt

With such an input, the output y of the nonlinear element will, in general, be a non-sinusoidal periodic function which may be expressed in terms of a Fourier series as follows:

y = Y₀ + A₁ cos ωt + B₁ sin ωt + A₂ cos 2ωt + B₂ sin 2ωt + ...
The nonlinear characteristics listed in the previous section are all odd-symmetrical/odd half-wave symmetrical; the mean value Y₀ for all such cases is zero and, therefore, the output is

y = A₁ cos ωt + B₁ sin ωt + A₂ cos 2ωt + B₂ sin 2ωt + ...

In the absence of an external input (i.e., r = 0 in Fig. 9.5), the output y of the nonlinear element N is fed back to its input through the linear elements G₂(s) and G₁(s) in tandem. If G₂(s)G₁(s) has low-pass characteristics (this is usually the case in control systems), it can be assumed, to a good degree of approximation, that all the higher harmonics of y are filtered out in the process, and the input x to the nonlinear element N is mainly contributed by the fundamental component (first harmonic) of y, i.e., x remains sinusoidal. Under such conditions, the second and higher harmonics of y can be thrown away for the purpose of analysis, and only the fundamental component of y, i.e.,

y₁ = A₁ cos ωt + B₁ sin ωt

need be considered.
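These Fourier coefficients can be evaluated numerically. As a sketch (with assumed relay level M and input amplitude X), the code below drives an ideal on-off element y = M sgn(x) with x = X sin ωt, extracts A₁ and B₁ by quadrature over one period, and recovers the classical relay describing function N = B₁/X = 4M/(πX), with A₁ ≈ 0 (no phase shift):

```python
import numpy as np

M, X = 1.0, 1.5                  # relay level and input amplitude (assumed values)
n = 200000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # theta = w*t, one period
d = 2.0 * np.pi / n

x = X * np.sin(theta)
y = M * np.sign(x)               # ideal relay output

# First-harmonic Fourier coefficients of y over one period (rectangle rule)
A1 = np.sum(y * np.cos(theta)) * d / np.pi
B1 = np.sum(y * np.sin(theta)) * d / np.pi

N = B1 / X                       # describing function (real: no phase shift)
print(A1)                        # ~0: odd nonlinearity gives no cosine term
print(N, 4 * M / (np.pi * X))    # matches the analytic result 4M/(pi*X)
```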

The above procedure heuristically linearizes the nonlinearity since, for a sinusoidal input, only a sinusoidal output of the same frequency is now assumed to be produced. This type of linearization, called the first-harmonic approximation, is valid for large signals as well, so long as the filtering condition is satisfied.
It is important to remind ourselves here that the simplicity in
analysis of nonlinear systems using describing functions, has been
achieved at the cost of certain limitations; the foremost being the
assumption that in traversing the path through the linear parts of
the system from nonlinearity output back to nonlinearity input, the
higher harmonics will have been effectively low-pass filtered,
relative to the first harmonic. When the linear part of the system does indeed provide a sufficiently strong filtering effect, the predictions of describing function analysis are usually a good approximation to actual behavior. Filtering characteristics of the linear part of the system improve as the order of the system goes up.
The 'low-pass filtering' requirement is never completely satisfied;
for this reason, the describing function method is mainly used for
stability analysis and is not directly applied to the optimization of
system design. Usually, the describing function analysis will
correctly predict the existence and characteristics of limit cycles.
However, false indications cannot be ruled out; therefore, the results must be verified by simulation.

Phase-Plane Analysis:
Another practically useful method for nonlinear system analysis is the phase-plane method. While phase-plane analysis does not suffer from any approximations, and hence can be used for stability analysis as well as optimization of system design, its main limitation is that it is applicable only to systems which can be well approximated by second-order dynamics. Its basic idea is to solve a second-order differential equation and graphically display the result as a family of system motion trajectories on a two-dimensional plane, called the phase plane, which allows us to visually observe the motion patterns of the system. The method is equally applicable to both hard and soft nonlinearities.

Singular points:
A system represented by an equation of the form ẋ = f(x), in which time does not appear explicitly, is called an autonomous system. For such a system, consider the points in the phase space at which the derivatives of all the state variables are zero. Such points are called singular points. These are, in fact, the equilibrium points already defined in Chapter 12. If the system is placed at such a point, it will continue to lie there if left undisturbed (the derivatives of all the phase variables being zero, the system state remains unchanged). For studying the system's dynamic response to a small perturbation at an equilibrium (singular) point, the system is linearized (using the linearization techniques presented in Chapter 12) at that point. The linearized model of the system of eqn. (15.8) may be written as ẋ = Ax.
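As a concrete example of my own (not from the text), the undamped pendulum ẋ₁ = x₂, ẋ₂ = −sin x₁ has singular points wherever x₂ = 0 and sin x₁ = 0. Linearizing numerically at each one reveals their different character: eigenvalues ±j at (0, 0) (a center) and ±1 at (π, 0) (a saddle).

```python
import numpy as np

def f(x):
    """Undamped pendulum in state form: x1' = x2, x2' = -sin(x1)."""
    return np.array([x[1], -np.sin(x[0])])

def jacobian(f, x, h=1e-6):
    """Central-difference Jacobian of f at x (numerical linearization)."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

for eq in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    assert np.allclose(f(eq), 0.0, atol=1e-12)   # confirm it is a singular point
    print(eq, np.linalg.eigvals(jacobian(f, eq)))
```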

Lyapunov Stability Analysis:


The most fundamental analysis tool is the concept of a Lyapunov
function and its use in nonlinear stability analysis. The power of the
method comes from its generality. It is applicable to all kinds of
control systems; systems with hard or soft nonlinearities, and of
second-order or higher-order. The limitation of the method lies in
the fact that it is often difficult to find a Lyapunov function for a
given system.
Lyapunov's original work, first published in 1892, included two methods for stability analysis: the so-called Lyapunov's first method (linearization method) and Lyapunov's second method (direct method). The linearization method draws conclusions about
a nonlinear system's local stability around an equilibrium point
from the stability properties of its linear approximation. The direct
method is not restricted to local motion and determines the
stability of a nonlinear system (directly without linearization) by
constructing a scalar function for a system and examining the
function's time variation.

Lyapunov's second method (Direct method):


The Lyapunov stability analysis is based upon the concept of energy, and the relation of stored energy to system stability. In general, there is no obvious way of associating an energy function with a given set of equations describing a system. In fact, there is nothing sacred or unique about the total energy of the system which allows us to determine system stability in this way. Other scalar functions of the system state can also answer the question of stability. This idea was introduced and formalized by the mathematician A.M. Lyapunov. The scalar function is now known as the Lyapunov function, and the method of investigating stability using a Lyapunov function is known as Lyapunov's direct method.
Positive Definiteness of Scalar Functions: A scalar function V(x) is said to be positive definite in the region ||x|| ≤ K (which includes the origin of the state space) if V(x) > 0 at all points of the region except at the origin, where it is zero.

Negative Definiteness of Scalar Functions: A scalar function V(x) is said to be negative definite if [−V(x)] is positive definite.

Positive Semidefiniteness of Scalar Functions: A scalar function V(x) is said to be positive semidefinite in the region ||x|| < K if its value is positive at all points of the region except at a finite number of points, including the origin, where it is zero.

Negative Semidefiniteness of Scalar Functions: A scalar function V(x) is said to be negative semidefinite if [−V(x)] is positive semidefinite.

Indefiniteness of Scalar Functions: A scalar function V(x) is said to be indefinite in the region ||x|| < K if it assumes both positive and negative values within this region.
Theorem 9.1:
For the autonomous system (9.59), sufficient conditions of stability are as follows.
Suppose that there exists a scalar function V(x) which, for some real number ε > 0, satisfies the following properties for all x in the region ||x|| ≤ ε:
(i) V(x) > 0; x ≠ 0
(ii) V(0) = 0
(i.e., V(x) is a positive definite function)
(iii) V(x) has continuous partial derivatives with respect to all components of x.
Then the equilibrium state xᵉ = 0 of the system is
(iva) asymptotically stable if V̇(x) < 0, x ≠ 0, i.e., V̇(x) is a negative definite function; and
(ivb) asymptotically stable in-the-large if V̇(x) < 0, x ≠ 0, and in addition V(x) → ∞ as ||x|| → ∞.

Theorem 9.2:
For the autonomous system (9.59), sufficient conditions of stability are as follows.
Suppose that there exists a scalar function V(x) which, for some real number ε > 0, satisfies the following properties for all x in the region ||x|| ≤ ε:
(i) V(x) > 0; x ≠ 0
(ii) V(0) = 0
(i.e., V(x) is a positive definite function)
(iii) V(x) has continuous partial derivatives with respect to all components of x.
Then the equilibrium state xᵉ = 0 of the system (9.59) is
(iva) asymptotically stable if V̇(x) < 0, x ≠ 0, i.e., V̇(x) is a negative definite function; or if V̇(x) ≤ 0 (i.e., V̇(x) is negative semidefinite) and no trajectory can stay forever at the points, or on the line, other than the origin, at which V̇(x) = 0;
(ivb) asymptotically stable in-the-large if conditions (iva) are satisfied, and in addition V(x) → ∞ as ||x|| → ∞; and
(ivc) stable in the sense of Lyapunov if V̇(x) is identically zero along a trajectory.
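As a worked example (mine, not from the text), consider ẋ₁ = x₂ − x₁(x₁² + x₂²), ẋ₂ = −x₁ − x₂(x₁² + x₂²) with candidate V(x) = x₁² + x₂². Along trajectories, V̇ = 2x₁ẋ₁ + 2x₂ẋ₂ = −2(x₁² + x₂²)², which is negative definite, and V → ∞ as ||x|| → ∞, so Theorem 9.1 gives asymptotic stability in-the-large. The sketch below checks V̇ < 0 numerically on a grid:

```python
import numpy as np

def f(x1, x2):
    r2 = x1**2 + x2**2
    return x2 - x1 * r2, -x1 - x2 * r2

def V(x1, x2):
    return x1**2 + x2**2             # candidate Lyapunov function

def Vdot(x1, x2):
    dx1, dx2 = f(x1, x2)
    return 2 * x1 * dx1 + 2 * x2 * dx2   # gradient of V dotted with f

# Sample V-dot over a grid of nonzero states: it should be strictly negative.
pts = [(a, b) for a in np.linspace(-2, 2, 9) for b in np.linspace(-2, 2, 9)
       if (a, b) != (0.0, 0.0)]
worst = max(Vdot(a, b) for a, b in pts)
print(worst)   # strictly negative: V decreases everywhere except the origin
```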

Instability:
It may be noted that instability in a nonlinear system can be established by direct recourse to the instability theorem of the direct method. The basic instability theorem is presented below.
Theorem 9.3:
For the autonomous system (9.59), sufficient conditions for instability are as follows.
Suppose that there exists a scalar function W(x) which, for some real number ε > 0, satisfies the following properties for all x in the region ||x|| ≤ ε:
(i) W(x) > 0; x ≠ 0;
(ii) W(0) = 0; and
(iii) W(x) has continuous partial derivatives with respect to all components of x.
Then the equilibrium state xᵉ = 0 of the system (9.59) is unstable if Ẇ(x) > 0, x ≠ 0, i.e., Ẇ(x) is a positive definite function.

Lyapunov's first method (linearization method):

The key results of Lyapunov's first method (linearization method) are presented below.

(i) If the linearized system ẋ = Ax is strictly stable (i.e., if all the eigenvalues of A are strictly in the left half of the complex plane), then the equilibrium point of the actual nonlinear system ẋ = f(x) is locally asymptotically stable.

(ii) If the linearized system is unstable (i.e., if at least one eigenvalue of A is strictly in the right half of the complex plane), then the equilibrium point is locally unstable for the nonlinear system.

(iii) If the linearized system has all eigenvalues of A in the closed left half of the complex plane, but at least one eigenvalue with zero real part, then one cannot conclude anything from the linear approximation (the equilibrium point may be stable in the sense of Lyapunov, locally asymptotically stable, or locally unstable for the nonlinear system).
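Applied to Van der Pol's equation with M = B = K = 1, the linearization at the origin is ẋ = Ax with A = [[0, 1], [−1, 1]] (the y² term drops out); its eigenvalues (1 ± j√3)/2 lie strictly in the right half-plane, so by result (ii) the origin is locally unstable, consistent with trajectories spiraling out toward the limit cycle. A quick check:

```python
import numpy as np

# Linearization of y'' + (y**2 - 1)*y' + y = 0 at the origin (M = B = K = 1):
# the y**2 term vanishes, leaving y'' - y' + y = 0, i.e. x' = A x with
A = np.array([[0.0, 1.0],
              [-1.0, 1.0]])
eig = np.linalg.eigvals(A)
print(eig)                      # (1 +/- j*sqrt(3))/2
print(np.all(eig.real > 0))     # True: the equilibrium is locally unstable
```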
