
UNIVERSITY OF NIGERIA, NSUKKA

FACULTY OF PHYSICAL SCIENCES

DEPARTMENT OF MATHEMATICS

GLOBAL ASYMPTOTIC STABILITY OF


NONLINEAR STRICT FEED-BACK
SYSTEM VIA BACK-STEPPING CONTROL
APPROACH

A PROJECT SUBMITTED TO THE DEPARTMENT OF MATHEMATICS,

FACULTY OF PHYSICAL SCIENCES, UNIVERSITY OF NIGERIA, NSUKKA IN

PARTIAL FULFILLMENT OF

THE REQUIREMENT FOR THE AWARD OF

MASTER OF SCIENCE (M.Sc.) DEGREE

IN MATHEMATICS

BY

GARAGARA, SUKA KINWI

REG NO.: PG/MSC/19/89581

SUPERVISOR: DR. S.E. ANIAKU

JANUARY, 2023
TITLE PAGE

GLOBAL ASYMPTOTIC STABILITY OF


NONLINEAR STRICT FEED-BACK
SYSTEM VIA BACK-STEPPING CONTROL
APPROACH

DECLARATION PAGE

Under the direction of Dr. S.E. Aniaku, the work detailed in this project was completed at the
University of Nigeria, Nsukka, between January 2019 and February 2023. These studies are the author’s
original work and have not previously been submitted to any university in any form for a degree or
diploma. Where the work of others has been used, it is properly acknowledged in the text.

Garagara, Suka Kinwi

APPROVAL PAGE

This project has been read and approved as having satisfied the requirements of the Department of Math-
ematics, Faculty of Physical Sciences, University of Nigeria, Nsukka, for the award of the Master of Science
(M.Sc.) degree in Mathematics.

Project Supervisor Head of Department


Dr. S.E. Aniaku Prof. F.O. Isiogugu

External Examiner

DEDICATION

All to the glory of God who does not have the ability to fail as time
tends to infinity.

ACKNOWLEDGMENT

I want to express my gratitude to a few people in particular.


Firstly, I want to start by thanking Dr. S.E. Aniaku, my supervisor, for his understanding, encouraging
words and guidance in helping me get through many challenges.
Secondly, I would like to express my profound gratitude to Prof. Mbah, Prof. Ochor, Dr. B.G. Akuchu,
Prof. Mrs. F.O. Isiogugu, Dr. O.C. Collins, Dr. D.F. Agbebeku, Dr. Mrs. U.A. Ezeafulukwe and Dr.
Mrs. Okofu. They taught me professional behavior in addition to imparting academic knowledge.
Furthermore, I want to express my gratitude to my spiritual fathers, Pastor Isaiah Oluwayemi and Pastor
Moses Melefa, for all their prayers and fatherly counsel. Words will fail me to start listing all the names
of the people of R.C.C.G. King’s Palace Parish (K.P.P.) for their hospitality towards me.
Additionally, I want to thank my friends Daniel, Ekemini M., Amos D., Wali O., Ugonma A., and Prisca
and Dorathy. I will never forget how we established a friendship that has lasted this long.
Finally, I want to express my heartfelt gratitude to my lovable parents and relatives for their kind love
and support throughout my period of study.

ABSTRACT

This research presents a control design methodology called back-stepping control that is applied to a
strict feedback nonlinear system. Back-stepping control is used based on its ability to handle the afore-
mentioned system. When back-stepping, the time-derivative of the control law created in the previous
step is expressed analytically. Backstepping begins with the system equation (integrator) that is located
furthest away from the control input and ends with the control input. The Lyapunov theorem is used to
demonstrate that with the use of the back-stepping controller, global asymptotic stability can be obtained.
The so-called ”first technique” only works in a few specific instances and assumes that an explicit
solution is known. By contrast, the ”second technique” is very powerful and general; it can answer
questions about stability for certain classes of differential equations without knowledge of their explicit
solutions. Our designed control approach achieves globally asymptotically stable behavior under
some appropriate assumptions. To demonstrate the efficiency of the suggested control mechanism, a
simulation was carried out using MATLAB, and the positive results were reported.

CONTENTS

TITLE PAGE ii

DECLARATION PAGE iii

APPROVAL PAGE iv

DEDICATION v

ACKNOWLEDGMENT vi

ABSTRACT vii

1 Introduction 1
1.1 Background of study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Criteria for a good control system . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 Some examples of Control Systems . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 The configuration of a control system . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.4 Closed loop (Feedback Control) system . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Types of feedback control system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.1 Linear and Nonlinear Control Systems . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Types of system based on stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Limit cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5 Adaptive control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 Definition of terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7 Objective of the study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.8 Scope of study: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

2 LITERATURE REVIEW 20
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3 METHODOLOGY 25
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2 Lyapunov stability theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Lyapunov’s Direct Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3.4 Lyapunov Based Control Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Back-stepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

4 Main Result 35
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Application of back-stepping control design to strict feedback systems . . . . . . . . . . 35
4.3 Simulation results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5 Conclusion 54

CHAPTER 1

INTRODUCTION

1.1 Background of study

The term ”system” has several connotations in contemporary usage. In order to start, let us define what
we mean by this term as it relates to this research, first in an abstract sense and then a little more precisely
in terms of scientific writing.
Any arrangement, set, or group of items related or connected in such a way as to constitute a whole is
referred to as a system.
The term ”control” is typically understood to mean ”direct,” ”regulate,” or ”command.” Combining the
definitions above, we get
Definition 1.0: A control system is any arrangement of physical parts, connected or interconnected in
such a way as to direct, command, or govern another system.
It is conceivable to think of every physical thing as a control system, at least in an abstract sense:
everything affects its surroundings in some way, directly or indirectly. A mirror, for example, redirects
a light beam at a definite angle.

[Figure 1.1: a fixed mirror reflecting a light beam. Figure 1.2: a mirror pivoted at one end and adjusted by a screw at the other.]

The mirror (Fig. 1.1) can be thought of as a basic control system that directs the light beam using the

straightforward formula ”the angle of reflection α equals the angle of incidence α.”
In science and engineering, the term ”control system” is typically used to refer to systems whose
primary purpose is to actively or dynamically direct, regulate, or command. The system shown in Fig.
(1.2), which consists of a mirror pivoted at one end and adjusted upward and downward by a screw at
the other, is appropriately referred to as a control system. The screw controls the angle at which light
is reflected.
There are many control systems in our surroundings. But first, we must define two terms, input and
output, which are useful for identifying, characterizing, or describing a control system.
The term ”input” refers to the excitation, stimulus, or directive given to any control system, usually
from an external source of energy, typically in order to cause the control system to behave in a
certain way.
The actual response achieved by a control system is its output. It may or may not be the same as
the response implied by the input (Hsu, 1995). Physical or abstract variables, such as references,
set points, or intended values for the control system’s output, can all serve as inputs. Control systems
may have multiple inputs and outputs. The input and the output represent, respectively, the desired
response and the actual response. In reaction to an input or stimulus, a control system produces an
output.
Actuators and controllers are found in every control system, as shown below.

Figure 1.3: A control system block diagram

The controller, which is often electronic, is the system’s brain. The set point, which is a signal de-
noting the preferred system input, is the input to the controller. The actuator is an electromechanical
component that transforms the controller’s signal into a physical action. A heating element, an electri-
cally operated valve, or an electric motor are examples of typical actuators.

Figure 1.3’s final block, labeled ”process”, has an output with the label ”controlled variable.” The con-
trolled variable is the measurable outcome of the physical process that the actuator is affecting, and the
process block represents that process. For instance, if an electric heating element in a furnace serves as
the actuator, the process is ”heating furnace,” and the controlled variable is the furnace’s temperature.
The process is ”rotation of the antenna” when an electric motor serves as the actuator and the
antenna’s angular position is the controlled variable (Kilian, 2001).
The response implied by the input may not be what the output actually delivers. We can identify or specify
the structure of the system’s components if the input and output are provided. Essentially, there are three
different categories of control systems:

1. Man-made control systems

2. Natural systems, such as biological ones

3. Control systems using both man-made and natural components.

Electric switches are artificial control systems used to regulate power flows. A biological control system,
primarily comprised of the eyes, arms, hands, and fingers of a person, is needed to perform the seemingly
simple act of pointing with a finger. The input is the precise direction of the object in relation to a reference,
and the output is the actual pointed direction in relation to the same reference. Human-driven car control
systems contain components that are surely both biological and man-made. The driver’s goal is to keep
the car in the correct traffic lane, which the driver achieves by paying close attention to how the car is
moving relative to the direction of the road. Robotics, space vehicle systems, autopilots and flight control
for aircraft, marine and ship control systems, automatic control systems for hydrofoils, intercontinental
missile guidance systems, high-speed rail systems, surface-effect ships, and magnetic levitation systems
are all examples of applications for control systems (Dukkipati, 2006).

1.1.1 Criteria for a good control system

1. Accuracy: Accuracy refers to an instrument’s measurement tolerance and establishes the upper
and lower bounds of errors that can be made under typical working circumstances.

Using feedback elements helps increase accuracy. Any control system that is required to be highly
accurate should have an error detector built in.

2. Sensitivity: The parameters of a control system are always changing alongside the change in sur-
rounding conditions, internal disturbance, or any other parameters. Sensitivity is a useful way of
expressing this change. Any control system ought to be insensitive to these factors and solely
responsive to input signals.

3. Noise: This is a term used to describe an unwanted input signal. To improve performance, an
effective control system ought to be able to lessen the noise effect.

4. Stability: The ability of the control system to maintain stability is crucial. A control system
is referred to as stable if every bounded input signal produces a bounded output signal, and a zero
input signal produces an output signal that decays to zero.

5. Speed: This refers to how long it takes the control system to provide a stable output. High speed
characterizes a good control system. Such a mechanism has a very brief transient period.

1.1.2 Some examples of Control Systems

Control systems are used in a wide variety of everyday and remarkable applications in science, industry,
and the home. Some instances are listed below:

1. Automatic hot water heater

2. A person adjusting the gas supply to the engine to control the speed of a car

3. An automobile’s speed control

4. Thermostat-controlled household heating and cooling system

5. Roadway junctions with automatic traffic control (signaling)

1.1.3 The configuration of a control system

The open-loop control system and the closed-loop control system are the two available control system
configurations.

Open-loop control system

An open-loop system is a control system that has no feedback attached to it. These systems do not rely
on their output; that is, in the open-loop systems, the output isn’t employed as a control variable and it
doesn’t affect the input. Open-loop systems only allow for one direction of signal flow. These systems
are also referred to as non-feedback systems because they do not have any feedback, meaning the output
isn’t transmitted to the input (Franklin et al., 2002).

Figure 1.4: Open-loop control system

In an open-loop control system, the controller independently determines the precise voltage or cur-
rent the actuator will need to complete the task and then sends it. The controller, however, is unable
to determine whether the actuator performed as intended, since there is no feedback. The controller’s
understanding of the actuator’s operating characteristics is therefore essential for this system to
function.

Open Loop System Applications

In several applications across our daily lives, open-loop controls are being used. The following are some
of the well-known systems built on the idea of open-loop control:

1. Bread Toaster: A toaster is a familiar instance of open-loop control; it runs for a defined period
whether or not the bread has finished toasting.

2. Electric Bulb : We are all aware that an electric bulb produces light when an electric current flows

through it. By turning on the bulb while the mains power is available, we can get it to function.
And neither the bulb’s temperature nor any other factors affect this procedure.

3. Washing Machine : Another everyday object that uses open-loop control is the washing machine.
The processes of soaking, rinsing, and drying are time-based and are unaffected by how clean or
dry the clothes are.

4. Electric Hand Dryer : The hand dryer draws on an electric power source to blow warm air for as
long as we keep our hands facing it, regardless of how dry they already are.

5. Traffic Control System : The majority of computerized traffic control systems use time-based open-loop
control, which means that each signal has a set time window during which it runs regardless of the
volume of traffic.

Advantages of Open Loop Control System

1. They are relatively easy to construct and very straightforward.

2. In general, they are stable up to a certain point.

Disadvantages of Open Loop control System

1. Open loop systems are unreliable and imprecise by nature.

2. Since they are non-feedback systems, there is no mechanism to automatically correct the system
if its output is affected by external disturbances.

1.1.4 Closed loop (Feedback Control) system

A closed-loop control system is a system with a feedback mechanism, i.e., one that employs feedback
to produce its output. It is also known as a feedback control system.
A feedback mechanism can be used to regulate the system’s stability; therefore, an open-loop control
system can be converted into a closed-loop one by supplying a feedback path.

Fig. (1.5) displays a basic closed-loop feedback control system. A feedback control system compares the
measured value of a system variable with its desired value and uses the disparity as a means of control
in order to preserve the prescribed relationship between one system variable and another. The observed
output is indeed a good indicator of the system’s real output when the sensor is accurate.

Figure 1.5: Closed-loop control system

A sensor is an apparatus that measures a required external signal; equivalently, it is a device that
transforms a physical stimulus into an output that can be read. In a control and automation
system, a sensor’s job is to identify and quantify some physical effect and provide that data to
the control system (Morales-Herrera et al., 2017). For instance, sensors used to quantify temperature
include resistance temperature detectors (RTDs).
An actuator is a device used by the control system to modify or alter the environment (Dorf and Bishop,
2008); equivalently, it is a mechanical part of a machine that moves and controls a system or mechanism,
for instance by opening a valve. Simply said, it is a ”mover” (Wikipedia contributors, 2023). As seen

in figure (1.5), a sensor continuously monitors the controlled variable in a closed-loop control system:
it takes an output sample from the system and turns it into an electric signal that it sends back to the
controller. Since the controller knows what the actual system is doing, it can make any required
adjustments to maintain the output precisely where it should be. The feedback path is the signal from
the sensor to the controller, which ”closes” the loop; the forward path is the signal from the controller
to the actuator. At the comparator in Figure 1.5 (just ahead of the controller), the feedback signal is
subtracted from the set point. The system error is calculated by deducting the actual position (as
recorded by the sensor) from the desired position (as determined by the set point). The error signal
shows the disparity between ”where you are” and ”where you want to go,” and the controller operates
constantly to minimize it. When the error is zero, the output is exactly what the set point dictates it
should be. The controller reduces the error by employing a control method that may be straightforward
or sophisticated; a simple approach, for instance, merely switches the actuator on or off. Despite the
additional hardware needed, closed-loop control is preferred to open-loop control in many situations
because of its self-correcting capability: even when the individual components in the forward path are
not precisely characterized, closed-loop systems nonetheless deliver dependable, repeatable performance.
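The comparator-and-controller loop described above can be sketched as a minimal simulation. The proportional gain, time step, and first-order process model below are illustrative assumptions of my own, not components of any system in this text:

```python
# Minimal closed-loop (feedback) control sketch: a proportional
# controller drives a first-order process toward a set point.
# Plant model, gain, and time step are illustrative assumptions.

set_point = 70.0   # desired value of the controlled variable
y = 20.0           # initial process output (controlled variable)
Kp = 2.0           # proportional controller gain
dt = 0.01          # integration time step (seconds)
tau = 1.0          # process time constant (seconds)

for _ in range(2000):          # simulate 20 seconds
    error = set_point - y      # comparator: set point minus feedback
    u = Kp * error             # controller output (forward path)
    y += dt * (u - y) / tau    # first-order process response

print(round(y, 2))             # prints 46.67
```

Note that the output settles near Kp · (set point)/(1 + Kp) ≈ 46.67 rather than at 70: a proportional-only controller leaves a steady-state offset, which is one reason practical controllers often add integral action.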

Some examples of closed-loop control system

A closed-loop control system is employed by various types of electrical devices. The following are some
areas where closed-loop control systems are applied:

1. The temperature of the iron’s heating element can be used to automatically control an electric iron.

2. Based on the room’s temperature, the AC’s temperature can be changed.

3. Automatic toaster.

Advantages of closed-loop control system

1. It automatically modifies the input of the system, thereby reducing error in the output.

2. It reduces the system’s sensitivity to outside influences.

3. Compared with an open-loop control system, this approach also lessens the effect of disturbances.

Disadvantages of closed-loop control system

1. They are more expensive.

2. Designing them is difficult.

3. The main issue is stability; hence, extra care must be taken when designing a reliable closed-loop
system.

1.2 Types of feedback control system

Depending on the classification’s goal, feedback control systems can be categorized in a variety of ways.
For instance, according to the method of analysis and design, control systems are classified as

1. Linear control system: If a system satisfies the following criteria, it is said to be linear, i.e.,

Superposition Principle: This means that it is possible to consider each input separately and
then add the results algebraically to produce the response to multiple inputs. Two properties make up
the principle of superposition in mathematics, namely the additive property and the homogeneous property.

Additive property: for all x and y in the domain of the function f, we have f(x + y) = f(x) + f(y).

Homogeneous property: for all x in the domain of the function f and any scalar constant
α, we have f(αx) = αf(x).

2. Non-Linear control system: this is simply a system that does not satisfy the superposition principle.

3. Single-Input Single-Output (SISO)

4. Multi-Input Multi-Output (MIMO)

5. Time-invariant systems

6. Time-variant systems.
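The two superposition properties in item 1 can be checked numerically. The maps f and g below are illustrative examples of my own, not systems from this text:

```python
# Numerical check of the superposition principle for an illustrative
# linear map f(x) = 3x and a nonlinear map g(x) = x**2.

def f(x):
    return 3.0 * x   # linear: satisfies both properties

def g(x):
    return x ** 2    # nonlinear: violates both properties

x, y, alpha = 2.0, 5.0, 4.0

# Additive property: h(x + y) == h(x) + h(y)
additive_f = f(x + y) == f(x) + f(y)
additive_g = g(x + y) == g(x) + g(y)

# Homogeneous property: h(alpha * x) == alpha * h(x)
homogeneous_f = f(alpha * x) == alpha * f(x)
homogeneous_g = g(alpha * x) == alpha * g(x)

print(additive_f, homogeneous_f)   # True True  -> f is linear
print(additive_g, homogeneous_g)   # False False -> g is not
```

A single pair of test points cannot prove linearity, of course, but one failing pair is enough to show that g is nonlinear.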

1.2.1 Linear and Nonlinear Control Systems

Physical systems are all nonlinear, yet in many cases the non-linear influence is so small that linear mod-
els can still produce good results. Because many physical systems’ components are imperfect, many of
them exhibit nonlinear behavior. However, a sizable portion of the systems are intentionally designed to
be nonlinear. A nonlinear system frequently outperforms an identical linear system in terms of weight,
cost, reliability, fabrication ease, and performance.

Simple analytical solutions to nonlinear differential equations are challenging to find and only exist for
a few particular classes, as has long been understood. Analysts have been interested in the approximate
solution of nonlinear differential equations for at least a century. Reversion and other well-known series
approximation methods like perturbation are used frequently. Numerous other techniques including
harmonic balancing and parameter change are also employed. The nonlinear variation must typically be
small, slow, or smooth for these strategies to be mathematically justified. The engineer may occasionally
encounter nonlinear control systems where none of these limitations apply and simulation provides the
only workable option. It is clear that the traditional exact solutions are not very useful.
Engineers usually define ”stability” as the characteristic of a system that produces a bounded response to
every load disturbance and bounded input. Even though this interpretation is accurate for linear stationary
systems, it can easily lead to incorrect results when applied to nonlinear systems. In nonlinear stationary
systems, the boundedness of the response to bounded inputs no longer ensures that the
unforced system response will asymptotically return to the equilibrium state over time. That is, asymp-
totic stability does not always imply total stability or stability in the presence of bounded input and/or
load perturbations.
Additional complications arise from the fact that in nonlinear systems the stability of an equilibrium
state is no longer a global concept but only a local system property (i.e., a nonlinear system could
be stable for reasonably small initial disturbances but become unstable under a reasonably large distur-
bance, and vice versa). A nonlinear system could also be stable for some bounded inputs while becoming
unstable for other bounded inputs.

An unstable control system is often worthless and could be harmful; thus, finding out whether a control
system is stable is the first and most crucial step in determining its various qualities. Qualitatively, the
system is said to be stable if starting it somewhere close to its desired operating point implies that it
will stay around that point ever after.
A dynamic system’s stable and unstable behavior is usually illustrated by the motion of a pendulum
started near its two equilibrium points, i.e., the vertical down and up positions. An intuitively relevant
stability problem for aviation control systems is whether a trajectory perturbation caused by a wind gust
produces a major variation in the later flight trajectory; the undisturbed flight trajectory represents the
system’s desired operating point. Every control system, whether linear or nonlinear, has a stability
issue, and this issue needs to be thoroughly researched.
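The pendulum example can be made concrete with a small numerical sketch. The unit parameters (g/L = 1) and damping value below are illustrative assumptions of my own:

```python
import math

# Pendulum illustration: perturb the two equilibria (theta = 0,
# hanging down; theta = pi, inverted) and track how far the angle
# strays from its starting value.

def max_excursion(theta0, steps=20000, dt=0.002, damping=0.5):
    """Integrate theta'' = -sin(theta) - damping*theta' (semi-implicit
    Euler) and return the largest deviation of theta from theta0."""
    theta, omega, dev = theta0, 0.0, 0.0
    for _ in range(steps):
        omega += dt * (-math.sin(theta) - damping * omega)
        theta += dt * omega
        dev = max(dev, abs(theta - theta0))
    return dev

down = max_excursion(0.1)           # near the stable (down) equilibrium
up = max_excursion(math.pi - 0.1)   # near the unstable (up) equilibrium

print(down < 0.3, up > 2.0)         # small excursion vs. large excursion
```

Starting near the hanging position, the state stays in a small neighborhood of where it began; starting near the inverted position, the same small perturbation sends the pendulum swinging far away, which is exactly the local character of stability discussed above.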

The theory developed by Russian mathematician Alexandr Mikhailovich Lyapunov in the late 19th cen-
tury is the most practical and all-encompassing method for researching the stability of nonlinear control
systems. The General Problem of motion stability by Lyapunov, which was initially published in 1892,
has two techniques for stability analysis: the so-called linearization method and the direct method. The
local stability of a nonlinear system near an equilibrium point is inferred by the linearization approach
from the stability characteristics of the nonlinear system’s linear approximation.
The direct method, which is not limited to local motion, establishes the stability characteristics of a
nonlinear system by constructing an ”energy-like” scalar function for it and analyzing its temporal variation.
Nevertheless, Lyapunov’s groundbreaking work on stability got little attention outside of Russia for
more than 50 years despite being translated into French in 1908 at Poincare’s suggestion and reprinted
by Princeton University Press in 1947.
In a modern control system, electronic intelligence governs a physical process. Control systems are what
make certain items ”automated,” like automatic washers and automatic pilots. The human operator gains
time because the computer itself makes the routine judgments. Machine intelligence is frequently
preferable to direct human control because it can react more quickly or more slowly, keep track of
slow long-term changes, respond with greater precision, and preserve an accurate record of the sys-
tem’s performance.

Numerous categories exist for dividing control systems. A regulator system keeps a parameter at or
close to the desired value automatically. A house heating system that maintains a set temperature despite
shifting weather outside is an illustration of this. The output is made to follow a predetermined set path
via a follow-up mechanism. An industrial robot transporting components from one location to another is
an example. A sequential succession of events is under the management of an event control system. A
washing machine rotating through a set of preprogrammed steps is one example.

In systems engineering, particularly in the field of control and automation systems, stability theory is
crucial for both dynamics and control. A dynamical system’s stability with or without control and distur-
bance inputs is a crucial criterion for its usefulness, particularly in the majority of real world applications.
Systems theory and engineering both heavily rely on stability theory. It addresses the behavior of the
system over an extended period. Stability can be described in a number of different ways.
Generally speaking, stability indicates that the system’s outputs and internal signals are constrained
within allowable bounds (the so-called bounded-input/bounded-output (BIBO) stability) or, more
precisely, that the system’s outputs tend to a desired equilibrium state (the so-called asymptotic
stability).
As an illustration, we could define stability from an input-output perspective by mandating that a
system’s output be in some way ”well behaved” whenever its input is bounded. As an alternative, we
might characterize stability by examining the asymptotic behavior of the system’s state close to
steady-state solutions such as equilibrium points. Any control system’s stability is essential for analysis and
system design. An unstable control system typically offers little to no usefulness. When the system is
not stable, the state and output variables grow unbounded as time passes; i.e., there is no control over
the output. The output of a stable system is bounded for bounded inputs. If all the poles of a closed-loop
control system lie in the left half of the s-plane, the system is said to be stable; it is said to be
unstable if the output grows without bound as time increases. When the state of a given system is
displaced (disturbed) from its desired operating point (equilibrium), the expectation is that the state
will eventually revert to the equilibrium; this is known as the asymptotic stability of an
equilibrium (operating point).

For instance, if a car is on cruise control and moving at a desired constant speed of 70 mph (which
establishes the operating point or equilibrium condition), perturbations from climbing (descending) hills
will cause the speed to decrease (increase). It is anticipated that the car will return to its desired operating
speed of 70 mph with a well-built cruise control system. Another qualitative description of stable
dynamical systems is the expectation that bounded system inputs will lead to bounded system outputs,
and that minor system changes will have only minor effects. If the state remains close to the
equilibrium after starting close by, the system is said to be Lyapunov stable; if, in addition, the state
gradually converges to the equilibrium, it is asymptotically stable. By this definition, if the equilibrium
of the system is the origin, the state of the unforced asymptotically stable system must eventually
converge to zero. Although the term has a broad application, we will restrict our use of it in this work
to linear time-invariant (LTI) systems. We also apply a stability concept known as ”Bounded-Input Bounded-Output
(BIBO) stability” for LTI systems. According to this theory, a system is stable if its output stays bounded
for all bounded inputs. Both of the aforementioned characteristics of LTI systems are connected to
the eigenvalues of the system’s state matrix, which are the roots of the system’s characteristic equation. The
various system parameters must be regulated in order to provide the desired output. Additionally, for
the output to be unaffected by unwanted changes in the system’s parameters or by disturbances, the system
must be sufficiently stable. So, it is safe to conclude that a stable system is one that is built to respond
as expected to changes in its parameters without any intolerable variation. It should be
noted that stability or instability is a defining attribute of a control system and as such depends on the
system’s closed-loop poles. Because of this, we may state that system stability is a characteristic of
the system itself rather than of the input. The poles of the applied input, however, determine
the system’s steady state output. Nonlinear differential equations are used to describe nonlinear control
systems. Using the suggested linearization approach is one method of controlling such systems. In that
situation, it is necessary to understand the nominal system trajectories and inputs. In addition, it has been
demonstrated that the linearization method only works when the input and trajectory deviations from the
nominal values are minimal. One must have the ability to resolve nonlinear control system issues in the
general case. Since the middle of the 1980s, several important nonlinear control theory conclusions have
been attained, nonlinear control systems have been a ”hot” topic of research.
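To make the eigenvalue criterion above concrete, the following sketch (plain Python with hypothetical example matrices, not taken from this work) classifies a second-order LTI system ẋ = Ax by the real parts of the eigenvalues of its state matrix:

```python
import cmath

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix from its characteristic equation
    s**2 - tr(A)*s + det(A) = 0 (quadratic formula)."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(A, tol=1e-12):
    """Asymptotically stable iff every eigenvalue has a negative real
    part; marginally stable when the largest real part is zero."""
    largest = max(ev.real for ev in eigenvalues_2x2(A))
    if largest < -tol:
        return "asymptotically stable"
    if largest <= tol:
        return "marginally stable"
    return "unstable"

# Damped oscillator x'' + x' + x = 0 in state-space form: stable.
print(classify([[0, 1], [-1, -1]]))   # asymptotically stable
# Undamped oscillator: purely imaginary eigenvalues, the marginal case.
print(classify([[0, 1], [-1, 0]]))    # marginally stable
```

The same test, with poles of a closed-loop transfer function in place of eigenvalues, reproduces the classification of systems in Section 1.3 below.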

1.3 Types of system based on stability

According to their stability, the systems can be categorized as follows:

(i) Absolutely stable system : this refers to a system that is stable over the whole range of
values of each system component. An open-loop control system is absolutely stable if every pole of
the open-loop transfer function lies in the left half of the 's' plane. In a similar vein,
a closed-loop control system is absolutely stable if all poles of the closed-loop transfer function are
located in the left half of the 's' plane; equivalently, it is one that produces a bounded output regardless
of changes in the system's parameters. This means that once the output of such a system reaches a
steady state, it does not change as a result of disturbances or changes in the values
of the system parameters. The step response of an absolutely stable system is depicted in the following
figure:

Figure 1.6: Absolutely stable signal

(ii) Conditionally stable system : A conditionally stable system produces a bounded output only for
those particular operating conditions specified by the system parameters. We may therefore
say that the system is stable only under specific circumstances, and that it produces an
unbounded output if those conditions are violated.

(iii) Marginally stable system : A system is considered marginally stable if, for a bounded input, it
produces an oscillating output signal of constant amplitude and frequency. An open-loop control
system is marginally stable if any non-repeated poles of the open-loop transfer function lie on the
imaginary axis; similarly, a closed-loop control system is marginally stable if any non-repeated poles
of the closed-loop transfer function lie on the imaginary axis. Equivalently, given a bounded
input, a marginally stable system will produce an oscillating signal with constant frequency and
amplitude.

Figure 1.7: Marginally stable signal

1.4 Limit cycles

In phase space, a limit cycle is a closed trajectory with the property that at least one other trajectory
spirals into it as time goes to infinity or as time approaches negative infinity. A limit cycle is thus an
isolated closed path. If all nearby trajectories approach the limit cycle as time approaches infinity, the
limit cycle is stable or attractive; in other words, all neighboring trajectories move in the direction of
the limit cycle. Otherwise, if all nearby trajectories approach the limit cycle as time approaches negative
infinity, the limit cycle is unstable (Prieto Guerrero and Espinosa Paredes, 2018).
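As an illustration (the Van der Pol oscillator, a standard textbook example rather than anything from the cited reference), the sketch below integrates a system with a stable limit cycle using forward Euler and checks that trajectories starting inside and outside the cycle settle onto oscillations of roughly the same amplitude:

```python
import math

def van_der_pol(x, y, mu=1.0):
    """Van der Pol vector field: x' = y, y' = mu*(1 - x**2)*y - x."""
    return y, mu * (1.0 - x * x) * y - x

def limit_cycle_amplitude(x0, y0, dt=1e-3, t_end=60.0, window=10.0):
    """Forward-Euler integration; returns the peak |x| observed over the
    final `window` seconds, i.e. the amplitude after transients decay."""
    x, y = x0, y0
    n, n_win = int(t_end / dt), int(window / dt)
    peak = 0.0
    for i in range(n):
        dx, dy = van_der_pol(x, y)
        x, y = x + dt * dx, y + dt * dy
        if i >= n - n_win:
            peak = max(peak, abs(x))
    return peak

# A trajectory spiralling out from inside the cycle and one spiralling
# in from outside both settle onto the same limit cycle, whose x
# amplitude for mu = 1 is roughly 2.
print(limit_cycle_amplitude(0.1, 0.0))
print(limit_cycle_amplitude(4.0, 0.0))
```

The step size and time horizon here are illustrative choices; a smaller dt sharpens the agreement between the two amplitudes.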

Figure 1.8: Limit-cycles-attracting-or-and-repelling-neighboring-trajectories

1.5 Adaptive control

Many methods for designing nonlinear control systems have been proposed in recent years. Among them,
adaptive control has received a lot of attention as a design approach that stabilizes nonlinear systems
affected by unknown circumstances or uncertain variables. The term "adaptive control"
refers to a group of approaches that offer a systematic method for automatic controller adjustment
in real time, so as to reach or maintain a desired level of control system performance when the parameters
of the plant's dynamic model are unknown or change over time. First, consider the scenario in which the
parameters of the plant's dynamic model are uncertain yet constant within a certain domain of
operation.
Even though the design of the controller in these situations will not depend on the specific values of such
plant parameters, it is in general still impossible to tune the controller settings correctly without
knowing these values. Adaptive control techniques can tune the controller parameters automatically
in closed loop. In these situations the adaptation's impact diminishes over time; restarting the
adaptation procedure may be necessary if the operating conditions change.
Further insight into how an adaptive control system functions can be gained by considering the
construction and tuning process of a "good" controller, illustrated below (Cai et al.,
2013; Landau et al., 2011).
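A minimal sketch of these ideas (a textbook-style direct adaptive regulator, not taken from the cited works): for the scalar plant ẋ = ax + u with the constant a unknown to the controller, the feedback u = −kx with the Lyapunov-based adaptation law k̇ = γx² drives x to zero without ever identifying a. The gains and time grid below are illustrative choices:

```python
def adaptive_regulate(a=2.0, gamma=5.0, x0=1.0, dt=1e-4, t_end=20.0):
    """Plant x' = a*x + u with a unknown to the controller.
    Control u = -k*x with the adaptation law k' = gamma*x**2; a standard
    Lyapunov argument shows x -> 0 even though a is never identified."""
    x, k = x0, 0.0                     # the controller starts with no
    for _ in range(int(t_end / dt)):   # knowledge of the plant gain a
        u = -k * x
        x += dt * (a * x + u)          # plant step (forward Euler)
        k += dt * gamma * x * x        # adaptation step
    return x, k

x_final, k_final = adaptive_regulate()
print(abs(x_final))   # regulated state: essentially zero
print(k_final)        # adapted gain settles above the unknown a = 2
```

Note that the adapted gain converges to a value larger than a, which is exactly what is needed to make the closed loop x' = (a − k)x decay; if the operating conditions (here, a) later change, the adaptation has to act again, as described above.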

Figure 1.9: An adaptive control system

Figure 1.10: Principles of controller design

1.6 Definition of terms

i. Cruise control : This is a system that automatically controls the speed of a motor vehicle.

ii. State of a system: A collection of numbers such that knowledge of these numbers and the input
function, when combined with the equations describing the dynamics, determines the system's future
state.

iii. A controller is a device used in control systems that works to reduce the discrepancy between
a system's desired value (the setpoint) and its actual value (the process variable). Controllers are a
key component of control engineering and are used in all sophisticated control systems.

iv. Bounded signal: This is a signal whose values are finite at all values of t, e.g., sin(t) or cos(t).
A continuous-time signal x(t) is bounded if |x(t)| ⩽ K for all time t, where K is a finite value.

v. A control system is referred to as a time-invariant system when its parameters remain constant
during the course of its operation.

vi. controllable: A system is said to be controllable at time t0 if it is possible, by means of an
unconstrained control vector, to transfer the system from any initial state x(t0 ) to any other state
in a finite interval of time (Cupelli et al., 2018).

vii. observable: A system is said to be observable at time t0 if, for the system in state x(t0 ), it is
possible to determine this state from the observation of the output over a finite time interval.

viii. Feedback is that property of a closed-loop system which permits the output (or some other con-
trolled variable) to be compared with the input to the system (or an input to some other internally
situated component or subsystem) so that the appropriate control action may be formed as some
function of the output and input.

ix. an equilibrium point x̃ is stable if: ∀ϵ, ∃ δ(ϵ) : |x0 − x̃| < δ =⇒ |x(t) − x̃| < ϵ, ∀ t > 0.

x. Transient time: This is the amount of time needed to transition from one state to another; the
transient response describes the behavior of the current and voltage during this time.

xi. A transfer function of a system, sub-system, or component in engineering: This is a mathematical
function that gives the system's output for each potential input. It is sometimes referred to as a
system function or network function.
If N(s) and H(s) denote the Laplace transforms of the input and output, respectively, the transfer
function is

T(s) = H(s)/N(s)

and it can typically be expressed as a ratio of polynomials in s.

xii. An equilibrium point x̃ ∈ Rn is said to be an isolated equilibrium point if there exists p > 0
such that the neighborhood ∥x − x̃∥ < p of x̃ contains no other equilibrium points.

xiii. The state of a dynamic system is the smallest set of variables such that the knowledge of these
variables at t = t0 , together with the knowledge of the input for t ⩾ t0 , completely determines the
behavior of the system at any time t ⩾ t0 .

xiv. A proportional-integral-derivative (PID) controller is a common type of control loop feedback
mechanism. A PID controller computes an "error" value as the discrepancy between the measured
process variable and the desired set point, and attempts to reduce this error by adjusting the
control process inputs.

1.7 Objective of the study

Our objective is to design a feedback controller which globally asymptotically stabilizes a nonlinear strict
feedback system.

1.8 Scope of study:

The main focus of this research study is back-stepping control design for a strict feedback nonlinear
system that has its equilibrium point at the origin.

CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

In this section, we review some of the literature related to this research.

In recent times, control techniques like forwarding and backstepping, which call for the explicit knowledge
of a Lyapunov function, have been developed (Praly et al., 2010).
The backstepping technique, together with a Lyapunov function, is used to create a cascade of nonlinear
controllers whose goals must be met in tandem in order to ensure the stability of
the entire system. For the construction of Photovoltaic (PV) and Power Factor Correction (PFC) voltage
controllers, the backstepping approach is used to create a nonlinear controller, while a form of filtered
Proportional Integral (PI) controller is utilized to accomplish direct current (DC) voltage regulation.
Averaging theory is used to formally analyze the system's stability, and simulation in the
MATLAB/Simulink environment is used to demonstrate the controller performance (Aouadi et al., 2017).
Control in general has been subjected to a wide range of design methodologies, from proportional integral
derivative (PID) control to model predictive control; see, e.g., (Magni et al., 1997).
Two of the more effective techniques are feedback linearization, also known as nonlinear dynamic
inversion (NDI), and linear quadratic (LQ) control.
In LQ control, a linear model is first built by linearizing the dynamics around a specific equilibrium
point. This strategy's key advantage is that it is founded on linear control theory, which enables the
designer to use all the common frequency-domain analysis tools. The cost is that nonlinear dynamic effects
are ignored in the model and, as a result, are not taken into consideration in the
control design. This encourages the adoption of nonlinear design techniques.
Feedback linearization is a nonlinear design technique that can explicitly handle these kinds of
nonlinearities (Isidori and Kang, 1995). A linear closed-loop system is obtained by using nonlinear
feedback to eliminate the effects of the nonlinearities on the controlled variables. This method can also
handle variations in the dynamics, allowing a single controller to be employed. The control community has
paid close attention to this approach, as seen in the works of (Freidovich and Khalil, 2008), (Spong, 1994),
and (Krener, 1999).
A comparison between nonlinear dynamic inversion (NDI) and backstepping concluded that both approaches
can result in the same controller and provide the best control performance (Wang et al., 2013).
To execute feedback linearization, the system nonlinearities, together with their derivatives up to a
certain order depending on how they enter the dynamics, must be fully characterized. Given that
the dynamic forces cannot be represented exactly, this can present a control issue. According to
(Oriolo et al., 2002), a linear robust controller should be added to the feedback linearization controller
to achieve robustness to such model errors.
An alternative to feedback linearization is the backstepping control design introduced by Petar V.
Kokotovic around 1990. Backstepping eliminates the need for the control law to cancel system
nonlinearities. Instead, the choice of how to handle each nonlinearity is made during design. A
nonlinearity may be kept in the closed-loop system if it acts stabilizing and is hence advantageous.
This makes the system more robust to model errors and may reduce the required control effort. In this
work, we design a control law based on backstepping for a fifth-order nonlinear system in strict
feedback form. Some recent studies present various mathematical techniques for analyzing the behavior
and stability of specific nonlinear systems (Mann and Shiller, 2008; Martinez et al., 2003). New methods
for the stability analysis of nonlinear systems are very appealing and necessary, especially for system
designers (Thomsen et al., 2003). It was also shown how stability, asymptotic stability, and exponential
stability are related, and the reasons why some systems are not stable were discussed (Aniaku et al.,
2019). System control design and analysis both heavily rely on Lyapunov stability theory (Marquez, 2003).
The availability of powerful, reasonably priced microprocessors in recent years has greatly advanced
nonlinear control theory and applications. The theory of feedback linearization, sliding control, and
nonlinear adaptation approaches has advanced significantly in recent years. Numerous useful nonlinear
control systems have been created for a variety

of applications, including digital "fly-by-wire" flight controls for airplanes, "drive-by-wire" autos, and
sophisticated robotic and space systems. As a result, nonlinear control is taking up increasing space in
automatic control engineering and is now a crucial component of control engineers' core training (Slotin
and Li, 1991). Systems of autonomous and nonautonomous ODEs, as well as the stability characteristics
of their solutions, were explored with some key findings; techniques for determining the stability of
nonlinear systems were examined, and the equilibrium points (also known as critical points) of linear
systems were classified according to their stability. An in-depth analysis was carried out using
Liapounov's direct technique for the stability of autonomous and nonautonomous equations (Obi, 2013).
Three approaches for studying the stability of nonlinear control systems have been introduced: the
linearization method, the direct Lyapunov method, and the Popov criterion. These approaches are simplified
and tabulated, since the stability analysis of nonlinear control systems is a challenging issue in
engineering practice (Matousek et al., 2009). The behavior of the system output near the equilibrium
state, or the Lyapunov stability of a system with respect to its equilibrium of interest, can be roughly
described as either gradually approaching the equilibrium (asymptotic stability) or meandering nearby and
around it. The orbital stability of a system output is the resistance of the trajectory to small
perturbations (Khalil, 2015; Merkin, 2012). A brief explanation of Lyapunov's fundamental stability
theory, standards, and methodology, as well as a few connected key stability ideas for nonlinear dynamical
systems, is given in (Chen, 2005).
The results of analysis and design for control systems that are subject to oscillatory inputs, i.e., inputs
with large amplitude and high frequency, have also been presented, with different example systems giving
insight into the outcomes; in terms of design, a range of point stabilization and trajectory tracking
results employing oscillatory control are extended and recovered. Elsewhere, an adaptive controller that
is smooth and free of singularities is first created for a first-order plant by incorporating a modified
Lyapunov function; then, utilizing back-stepping design, the scheme is extended to high-order nonlinear
systems. The control strategy created ensures the uniform ultimate boundedness of the closed-loop adaptive
systems, and the relationship between the transient performance and the design parameters is provided to
assist in tuning the controller (Zhang et al., 2000). The last decades have seen amazing advancements in linear

control systems, and as a result, many of the challenges in this area have been solved. The strict design
requirements of contemporary technology necessitate complex control laws, emphasizing the increasingly
prominent role of nonlinear control systems. One survey briefly discusses the historical role of analytical
principles in the design and analysis of nonlinear control systems, and examines recent improvements in
these systems from an application standpoint along with critical remarks on related difficulties; it is
predicted that further dissemination of such a thorough analysis will encourage increased
research-community collaboration and advance new breakthroughs (Iqbal et al., 2017). Another work focused
on the analysis of
nonlinear differential equations using linearization methods and linear differential equation theory.
Mathematical models of real-world phenomena frequently take the form of difficult-to-solve explicit
systems of nonlinear differential equations. To get over this obstacle, phase portraits and stability
analysis were used to analyze the nonlinear system solutions qualitatively, and various methodologies
were illustrated in the study of two systems of nonlinear differential equations (Morgan, 2015). Physics
and nature generally exhibit nonlinear behavior. In general, linear models produced through linearization
or identification are rough approximations of a plant's nonlinear behavior near an operating point. A
linear model is frequently insufficient to accurately reproduce reality in situations like startup,
shutdown, or significant transient regimes, and the resulting linear controller cannot ensure stability
and performance. However, linear models and linear controllers predominate by a wide margin because
nonlinear control is challenging to handle (Corriou and Pierre, 2017). Nevertheless, if the end users are
prepared to put in some work, there are effective strategies that may be applied to nonlinear models;
backstepping, sliding mode control, and other theories already exist (Vidyasagar, 2002; Khalil, 2015).
Systems engineering is heavily influenced by stability
theory, particularly regarding dynamical control systems that combine dynamics and control (Hahn,
1967). Certain necessary and sufficient conditions have been given that guarantee that a linear control
system with input is completely observable (Aniaku and Jackreece, 2017). The backstepping technique
combined with neural networks has been presented as a nonlinear adaptive controller for the quadrotor
helicopter: backstepping is utilized to preserve the stability of the pitch and roll angles while achieving
good tracking of the target translation positions and yaw angle. Only a few aspects of the quadrotor's
model must
be understood by the controller; the physical parameters and the quadrotor's precise model are not
necessary (Madani and Benallegue, 2008). The control of a delta wing aircraft's wing rock phenomenon
has also been considered: a backstepping control strategy was suggested to stabilize the system, and it
was shown that the suggested control solution ensures that all the system's states asymptotically converge
to zero, demonstrating that the strategy is effective for the wing rock phenomenon of a delta wing
aircraft (Alhamdan and Alkandari, ). The stability of third-order nonlinear ordinary differential
equations has been examined: a suitable Lyapunov function was constructed and used to demonstrate
asymptotic stability. The strategy was first to consider the linear form of the ordinary differential
equations (ODEs) and study its Lyapunov stability, and then, making use of the similarities between linear
and nonlinear ODEs, to construct a Lyapunov function for the stability analysis of the given nonlinear
differential equation (Okereke et al., 2016). In this work, we will extend this
concept to a strict feedback system.

CHAPTER 3

METHODOLOGY

3.1 Introduction
For a very long time, both linear and nonlinear control theories have relied heavily on Lyapunov
theory. Finding a Lyapunov function for a particular system, however, can be challenging, which frequently
limits its application in nonlinear control. If such a function is found, the system is guaranteed to be
stable; however, finding one is frequently left to the designer's creativity and expertise.

Backstepping, a systematic technique of nonlinear control design applicable to a wide range
of systems, is used in this context. The name backstepping refers to the recursive
nature of the design process. First, a small subsystem alone is taken into consideration, and a "virtual"
control law is developed for it. The design is then expanded in a number of steps until a control law for
the entire system is created. A Lyapunov function for the controlled system is built incrementally along
with the control law.

A key aspect of backstepping is that nonlinearities can be dealt with in a variety of ways.
Sector-bounded nonlinearities can be handled by utilizing linear control, and useful nonlinearities that
act stabilizing can be kept. In contrast, in feedback linearizing control the nonlinearities are eliminated
by nonlinear feedback (Isidori and Kang, 1995).
Maintaining nonlinearities, as opposed to canceling them, calls for less accurate models and possibly less
control effort. Additionally, it is occasionally possible to demonstrate that the resulting control laws
are optimal with respect to a meaningful performance index, which ensures certain robustness properties.

3.2 Lyapunov stability theory


Lyapunov theory is the foundation of backstepping control design. The goal is to create a control law
that moves the system to the target state, or at least very close to it; in other words, we want to keep
the closed-loop system's state in a stable equilibrium. In this section, we define stability in the
Lyapunov sense and then go over the key methods for demonstrating the stability of an equilibrium. For the
proofs of the stability theorems, see (Slotin and Li, 1991) and (Khalil, 2002).
Before introducing the typical back-stepping method, we first describe the Lyapunov function and explain
how to employ it in the stability analysis and controller design of straightforward first-order systems.
In control theory and engineering, stability theory is crucial. Stability issues come up in the

analysis of system dynamics in a variety of ways. Equilibrium point stability is the main topic of discus-
sion in this study.

In 1892, A.M. Lyapunov introduced two approaches, known as the first and second methods, for determining
the stability of dynamic systems described by ODEs; they are often used to describe the stability of
equilibrium points.

All approaches that use the explicit form of the solutions of the differential equations for analysis fall
under the first method. The second method, in contrast, does not call for solving the differential
equations: by applying the second Lyapunov approach, we can assess the system's stability
without having to solve the state equation. This is highly helpful because it can be challenging to solve
nonlinear and/or time-varying state equations.

The Lyapunov stability theorems provide sufficient conditions for stability, as well as for asymptotic and
other types of stability. For several classes of ODEs, the existence of a Lyapunov function is a necessary
and sufficient condition for stability, although there is no universal method for constructing Lyapunov
functions for ODEs. While Lyapunov's direct method has emerged as the key technique for nonlinear system
analysis and design, Lyapunov's first method now represents the theoretical basis of linear control. The
so-called Lyapunov stability theorem combines the first approach with the direct method.

Consider an autonomous nonlinear system

ẋ = γ(x(t)), x(0) = x0 , x(t) ∈ D ⊆ Rm (3.2.1)

where x(t) represents the state vector, D is an open set containing the origin, and γ : D → Rm is a
continuous vector field on D. Suppose γ has an equilibrium at x̃, so that γ(x̃) = 0. The following
definition describes the stability characteristics of this equilibrium.

Definition 3.2.1. stable in the sense of Lyapunov: The equilibrium point x̃ of the above system is said
to be

(i) stable if for any given value ϵ > 0, ∃ δ(ϵ) such that

∥x0 − x̃∥ < δ =⇒ ∥x(t) − x̃∥ < ϵ, ∀ t ⩾ 0

Figure 3.1: Lyapunov’s stability

(ii) unstable if it is not stable

(iii) asymptotically stable if:

[a.] it is stable in the sense of Lyapunov
[b.] ∃ δ(ϵ) such that

∥x0 − x̃∥ < δ(ϵ) =⇒ lim t→∞ ∥x(t) − x̃∥ = 0

Figure 3.2: A pictorial overview of asymptotic stability.

(iv) globally asymptotically stable if it is asymptotically stable for any initial state, which means that

lim t→∞ x(t) = x̃ = 0.

In these definitions, x(t) is the trajectory solving the above system. In general, an analytical solution
for x(t) is difficult, if not impossible, to obtain. Fortunately, there are alternative ways to
demonstrate stability.
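The distinction between definitions (i) and (iii) can also be seen numerically. In the sketch below (an illustrative pendulum model, not taken from the source), the undamped pendulum is stable in the sense of Lyapunov, since trajectories starting near the origin stay near it without converging, while adding damping makes the origin asymptotically stable:

```python
import math

def final_state_norm(damping, theta0=0.5, dt=1e-3, t_end=30.0):
    """Pendulum theta'' = -sin(theta) - damping*theta', integrated with
    semi-implicit Euler (which tracks the energy level well when the
    damping is zero). Returns the distance from the equilibrium at t_end."""
    th, om = theta0, 0.0
    for _ in range(int(t_end / dt)):
        om += dt * (-math.sin(th) - damping * om)
        th += dt * om
    return math.hypot(th, om)

# Undamped: the state neither escapes nor converges; Lyapunov stable.
print(final_state_norm(0.0))
# Damped: the state decays to the origin; asymptotically stable.
print(final_state_norm(0.5))
```

The initial displacement, step size, and damping coefficient are illustrative choices; the qualitative picture matches the ϵ-δ definitions above.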

3.3 Lyapunov’s Direct Method


According to the classical theory of mechanics, a vibrating system is stable when its total energy,
which is a positive definite function, continuously decreases until its equilibrium state is attained.
This requires that the derivative of the energy with respect to time be negative definite.

The second method is founded on a generalization of the following observation: if the system has an
asymptotically stable equilibrium state, then the stored energy of the system, displaced within
the domain of attraction, decreases as time increases until it finally assumes its minimum
value at the equilibrium state.

However, constructing an "energy function" for purely mathematical systems is not an easy
task. To get around this problem, Lyapunov introduced the Lyapunov function, an artificial energy
function. This concept is more universal and applicable than that of energy: in fact, a Lyapunov function
can be any scalar function that satisfies the hypotheses of Lyapunov's stability theorem (see Theorem 1
below).

Lyapunov functions depend on y1 , y2 , ..., yn and t. We denote them by V(y1 , y2 , ..., yn , t), or simply
by V(y, t). If a Lyapunov function does not include t explicitly, we denote it by V(y1 , y2 , ..., yn ) or
V(y) (Ogata, 1999).
In the second Lyapunov method, we can determine the stability and asymptotic stability of the equilibrium
point without explicitly solving for the solution, by observing the behavior of the signs of V(y, t)
and of its derivative with respect to time, V̇(y, t) = dV(y, t)/dt.
The fundamental idea of this method is a mathematical elaboration of the following physical observation:
any electrical or mechanical system tends to approach a lower-energy configuration if its (positive)
energy is constantly decreasing. In other words, it is not necessary to find the exact solution of the
system in order to draw conclusions about its stability; it suffices to examine the changes of a
particular scalar function known as the Lyapunov function. This is precisely the method's strong feature,
because it eliminates the need to compute the solution x(t) of the equations of motion in order to
determine the evolution of the system; finding explicit solutions to nonlinear systems is challenging
and occasionally impossible.
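The following sketch (a hypothetical scalar system in plain Python) shows this idea in its simplest form: for ẋ = −x³, the candidate V(x) = x²/2 has V̇ = x·ẋ = −x⁴, and the sign conditions can be checked on a grid of states without ever solving the differential equation:

```python
def f(x):
    """Right-hand side of the hypothetical system x' = -x**3."""
    return -x ** 3

def V(x):
    """Candidate Lyapunov function V(x) = x**2 / 2."""
    return 0.5 * x * x

def V_dot(x):
    """Derivative along trajectories: V' = (dV/dx) * f(x) = -x**4."""
    return x * f(x)

# Check the sign conditions on a grid of states without ever solving
# the differential equation; this is the essence of the direct method.
states = [i / 10.0 for i in range(-50, 51) if i != 0]
print(all(V(x) > 0 and V_dot(x) < 0 for x in states))  # True
print(V(0.0))                                          # 0.0
```

Of course, a grid check is only a sanity test; the actual proof that V̇ = −x⁴ < 0 for all x ≠ 0 is done by hand, as in the theorems below.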

We begin by taking a look at a few of the functional characteristics that we’ll utilize to define Lya-
punov functions.

Definition 1.1. Positive definiteness of scalar functions: In a region Ω (which contains the origin
of the state space), a scalar function V(x) is said to be positive definite if the following conditions
are satisfied:

i. V(x) > 0 for all states x ̸= 0 in the given region

ii. V(0) = 0

Negative definiteness of scalar functions: V(x) is said to be negative definite if −V(x) is positive
definite. A scalar function V(x) is said to be positive semidefinite in a region Ω if it satisfies the
following conditions:

i. V(x) ⩾ 0 for all states x in the given region

ii. V(0) = 0

Theorem 1 (Marquez, 2003) Lyapunov Stability Theorem: Let x̃ = 0 be an equilibrium point of
dx/dt = g(x, t), g : Ω → Rm , and let V : Ω → R be a continuously differentiable function. Then
x̃ = 0 is stable in the sense of Lyapunov if the following are satisfied:

i. V(0) = 0

ii. V(x) > 0 in Ω − {0}

iii. V̇ ⩽ 0 in Ω − {0}

Whenever an equilibrium is asymptotically stable, it is frequently crucial to understand the circumstances
under which an initial state will tend towards the equilibrium state. In the best-case scenario, every
initial state tends to the equilibrium position; an equilibrium state possessing this property is said to
be globally asymptotically stable (Marquez, 2003).
Theorem 2 (Marquez, 2003) (Asymptotic Stability Theorem) Under the conditions stated in Theorem
1, x̃ = 0 is asymptotically stable if V(.) is such that:

i. V(0) = 0

ii. V(x) > 0 in Ω − {0}

iii. V̇ < 0 in Ω − {0}

Theorem 3 The origin x̃ = 0 is globally asymptotically stable (stable in the large) if the following
conditions are satisfied:

i. V(0) = 0

ii. V(x) > 0, x ̸= 0

iii. V̇ < 0, x ̸= 0

iv. V(x) is radially unbounded, i.e., V(x) → ∞ as ∥x∥ → ∞

3.4 Lyapunov Based Control Design
Now let us discuss control design based on Lyapunov theory. Consider the system
ẋ = γ(x, u)

here, x is regarded as the system’s state and u is the control input. We aim at designing a feedback
control law
u = c(x)

such that x = 0 is a globally asymptotically stable equilibrium of the feedback system.

ẋ = γ(x, c(x))

In order to ensure global asymptotic stability, we have to construct a Lyapunov function V(x) that
satisfies Theorem 3. Lyapunov control design is simply about designing a feedback control law and
a Lyapunov function V(x) together. One finds c(x) by selecting a radially unbounded, positive definite
function V(x), and then choosing c(x) such that

V̇ = Vx (x)γ(x, c(x)) < 0, x ̸= 0
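A minimal sketch of this recipe (a hypothetical scalar plant, plain Python): for ẋ = x² + u, choosing V(x) = x²/2 and the feedback c(x) = −x² − x gives V̇ = x(x² + c(x)) = −x², which is negative for x ≠ 0, so the closed-loop origin is globally asymptotically stable:

```python
def plant(x, u):
    """Open-loop dynamics of the hypothetical scalar plant x' = x**2 + u."""
    return x * x + u

def c(x):
    """Feedback law: cancel the nonlinearity and add linear damping,
    so that the closed loop becomes x' = -x."""
    return -x * x - x

# V(x) = x**2/2 gives V_dot = x*(x**2 + c(x)) = -x**2 < 0 for x != 0.
assert all(x * plant(x, c(x)) < 0 for x in [-2.0, -0.5, 0.3, 1.7])

# Closed-loop simulation from a nonzero initial state decays to zero.
x, dt = 2.0, 1e-3
for _ in range(10000):                 # 10 seconds of forward Euler
    x += dt * plant(x, c(x))
print(abs(x))                          # close to zero (about 2*exp(-10))
```

Note that this particular c(x) cancels the nonlinearity outright, in the style of feedback linearization; backstepping, introduced next, is what extends such scalar designs to chains of integrators.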

3.5 Back-stepping
Lyapunov-based control design presupposes that a control Lyapunov function is known for the system to be
controlled. What if the opposite is true? How can a control law, together with a Lyapunov function
demonstrating closed-loop stability, be found? For a class of nonlinear systems, backstepping employs a
recursive design to address this issue.
Following Krstic et al. (1995), we go over the backstepping approach in this section, the systems it may
be applied to, and the available design options.
Back-stepping is a method in control theory, developed around 1990 by Petar V. Kokotovic and others
(Kokotovic, 1992; Lozano et al., 1992), for constructing stabilizing controllers for a certain class of nonlinear
dynamical systems. These systems are built up from subsystems that radiate outward from an irreducible subsystem
which can be stabilized by some other method. (The designs are called back-stepping because they step back
toward the control input, beginning with the scalar equation that is separated from it by the greatest number of
integrations.) Owing to this recursive structure, the designer can start the design at the known-stable subsystem and
"back out" new controllers that progressively stabilize each outer subsystem. The process terminates when
the final external control is reached; hence the name back-stepping. Back-stepping is a recursive process that integrates the design of the feedback control

with the selection of a Lyapunov function. A design problem for the entire system is divided into a sequence of
design problems for lower-order (including scalar) subsystems. By exploiting the added flexibility afforded by
these lower-order and scalar subsystems, back-stepping can frequently solve tracking, stabilization, and robust
control problems under conditions less restrictive than those encountered by other methods.
The back-stepping method recursively stabilizes the origin of a system in strict feedback form.
Specifically, consider a system of the form (Khalil, 2015)





η̇ = f0(η) + ϕ0(η)ξ1,
ξ̇1 = f1(η, ξ1) + ϕ1(η, ξ1)ξ2,
ξ̇2 = f2(η, ξ1, ξ2) + ϕ2(η, ξ1, ξ2)ξ3,
⋮
ξ̇i = fi(η, ξ1, ξ2, ..., ξi) + ϕi(η, ξ1, ξ2, ..., ξi)ξi+1,  for 1 ⩽ i ⩽ k − 1,
⋮
ξ̇k−1 = fk−1(η, ξ1, ξ2, ..., ξk−1) + ϕk−1(η, ξ1, ξ2, ..., ξk−1)ξk,
ξ̇k = fk(η, ξ1, ξ2, ..., ξk−1, ξk) + ϕk(η, ξ1, ξ2, ..., ξk−1, ξk)u.    (3.3.1)

where
η ∈ Rm with m ⩾ 1,
ξ1, ξ2, ..., ξi, ..., ξk−1, ξk are scalars,
u is the system's scalar input,
f0, f1, f2, ..., fk vanish at the origin, i.e., fi(0, 0, ..., 0) = 0,
ϕ1, ϕ2, ..., ϕk−1, ϕk are non-zero over the domain of interest, i.e., ϕi(η, ξ1, ..., ξi) ̸= 0 for 1 ⩽ i ⩽ k.
Such systems are called "strict feedback" because the nonlinearities fi and ϕi
in the ξ̇i equation (i = 1, 2, ..., k) depend only on η, ξ1, ξ2, ..., ξi, i.e., on the states that are "fed back"
to that subsystem (Wikipedia contributors, 2021).

To illustrate the back-stepping procedure, we begin by examining the simplest instance of (3.3.1), for
which k = 1. It is given by (Khalil, 2002)

η̇ = f(η) + ϕ(η)ξ (3.3.2)


ξ̇ = u (3.3.3)

where [ηT, ξ]T ∈ Rm+1 is the state and u ∈ R is the control input. The functions f : P → Rm and
ϕ : P → Rm are smooth (that is, they have continuous partial derivatives of all orders) on a domain P ⊂ Rm
that contains η = 0, and f(0) = 0.

Theorem 3.1 (Backstepping theorem) (Krstić and Kokotović, 1995; Khalil, 2002).
Consider the system (3.3.2) − (3.3.3). Let ξ = γ(η), with γ(0) = 0, be a stabilizing state feedback control
for equation (3.3.2), and let V(η) be a Lyapunov function that satisfies

(∂V/∂η)[f(η) + ϕ(η)γ(η)] ⩽ −W(η), ∀η ∈ D ⊂ Rm,

where W(η) is positive definite. Then the state feedback control law (3.3.14)
stabilizes the origin of (3.3.2) − (3.3.3), with V(η) + ½(ξ − γ(η))² as a Lyapunov function.
Moreover, if the assumptions hold globally and V(η) is radially unbounded, then the origin is
globally asymptotically stable.
Proof. Our goal is to design a feedback controller that stabilizes the origin (i.e., η = 0, ξ = 0), where ξ is
treated as a virtual control input.
We assume that both f and ϕ are known. Suppose that equation (3.3.2) can be stabilized by a smooth
feedback controller
ξ = γ(η)

with γ(0) = 0 that is, the origin of


η̇ = f(η) + ϕ(η)γ(η)

is asymptotically stable.

Moreover, suppose that we know a (positive definite, smooth) Lyapunov function V(η) whereby the
inequality is satisfied

V̇ = (∂V/∂η)η̇ = (∂V/∂η)[f(η) + ϕ(η)γ(η)] ⩽ −W(η),  η ∈ D    (3.3.4)

Where W(η) is a positive definite function on Rm .


Without changing the dynamics of the system, we add and subtract ϕ(η)γ(η) on the right hand side of
equation (3.3.2) to obtain

η̇ = f(η) + ϕ(η)ξ + ϕ(η)γ(η) − ϕ(η)γ(η)


η̇ = f(η) + ϕ(η)γ(η) + ϕ(η) [ξ − γ(η)]
ξ̇ = u

We let,

z = ξ − γ(η) (3.3.5)

ξ = z + γ(η) (3.3.6)
ξ̇ = ż + γ̇ (3.3.7)

where z is the corresponding error variable.


Substituting equation (3.3.5) − (3.3.7) into equation (3.3.2) − (3.3.3), we get

η̇ = f(η) + ϕ(η)γ(η) + ϕ(η)z (3.3.8)


ż = u − γ̇ (3.3.9)

Since we assumed that f, γ and ϕ are known, we can compute the derivative γ̇ from the expression

γ̇ = (∂γ/∂η)η̇ = (∂γ/∂η)[f(η) + ϕ(η)ξ]

By letting τ = u − γ̇, equation (3.3.9) becomes

ż = τ    (3.3.10)

which is equivalent to the original system, except that now the first subsystem has an asymptotically
stable origin when its input is zero (i.e., by construction, when z = 0 we have η̇ = f(η) + ϕ(η)γ(η), whose
origin is asymptotically stable, so η → 0). Using this feature, we design τ to stabilize the entire
system. Let

V(η, ξ) = V(η) + ½z²
be our Lyapunov candidate function. Then,

V̇ = (∂V/∂η)η̇ + zż    (3.3.11)

Substituting equation (3.3.8) and (3.3.10) into equation (3.3.11), we obtain

V̇ = (∂V/∂η)[f(η) + ϕ(η)γ(η) + ϕ(η)z] + zτ
  = (∂V/∂η)[f(η) + ϕ(η)γ(η)] + (∂V/∂η)ϕ(η)z + zτ
  ⩽ −W(η) + (∂V/∂η)ϕ(η)z + zτ    (from equation (3.3.4))    (3.3.12)

We now choose τ so as to cancel the positive terms in equation (3.3.12). We let

τ = −(∂V/∂η)ϕ(η) − kz,  k > 0    (3.3.13)

Substituting, we have

V̇ ⩽ −W(η) + (∂V/∂η)ϕ(η)z + z[−(∂V/∂η)ϕ(η) − kz]

∴ V̇(η, ξ) ⩽ −W(η) − kz²

which shows that the origin (η = 0, z = 0) is asymptotically stable. Since γ(0) = 0, z = 0 implies
ξ = γ(0) = 0, so the origin (η = 0, ξ = 0) is asymptotically stable by Lyapunov stability theory (Khalil, 2002).
Making u the subject of equation (3.3.9) and substituting for τ, z and γ̇, we obtain

u = ż + γ̇
  = τ + γ̇

u = (∂γ(η)/∂η)[f(η) + ϕ(η)ξ] − (∂V(η)/∂η)ϕ(η) − k[ξ − γ(η)]    (3.3.14)

If all the assumptions hold globally and V(η) is radially unbounded

(i.e., V(η) → ∞ as ∥η∥ → ∞), then we can conclude that the origin is globally asymptotically stable.

Note: This control law is not necessarily the best or the only globally stabilizing control law
for (3.3.2) − (3.3.3). The theorem is valuable because it demonstrates that there is at least one globally
stabilizing control law for augmented systems of this kind (Khalil, 2002).
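To make the construction concrete, the following sketch instantiates the control law (3.3.14) for hypothetical scalar data of our own choosing (f(η) = η², ϕ(η) = 1, γ(η) = −η − η² so that f + ϕγ = −η, V(η) = η²/2, W(η) = η², gain k = 1.5) and checks numerically that the augmented Lyapunov derivative satisfies V̇ = −W(η) − kz²:

```python
import random

# Numerical instance of the control law (3.3.14) for hypothetical data:
# f(eta) = eta**2, phi(eta) = 1, gamma(eta) = -eta - eta**2,
# V(eta) = eta**2/2, W(eta) = eta**2, gain K > 0 (all our own choices).
K = 1.5

def f(e):    return e ** 2
def phi(e):  return 1.0
def gam(e):  return -e - e ** 2
def dgam(e): return -1.0 - 2.0 * e   # d(gamma)/d(eta)

def u_law(e, xi):
    # (3.3.14): u = gamma'(eta)[f + phi*xi] - V_eta*phi - k*(xi - gamma)
    return dgam(e) * (f(e) + phi(e) * xi) - e * phi(e) - K * (xi - gam(e))

for _ in range(1000):
    e = random.uniform(-3, 3)
    xi = random.uniform(-3, 3)
    z = xi - gam(e)
    eta_dot = f(e) + phi(e) * xi
    z_dot = u_law(e, xi) - dgam(e) * eta_dot      # z_dot = u - gamma_dot
    v_dot = e * eta_dot + z * z_dot               # derivative of V + z**2/2
    assert v_dot <= -e ** 2 - K * z ** 2 + 1e-9   # equals -W(eta) - k*z**2
```

The check reproduces the proof: the cross term ηz contributed by the first subsystem is cancelled exactly by the −(∂V/∂η)ϕ(η) term inside u.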

CHAPTER 4

MAIN RESULT

4.1 Introduction
In this chapter, we analyze some nonlinear control systems in strict feedback form using the mathematical tools
discussed earlier. Furthermore, we use MATLAB to show the effectiveness of the control design approach.

4.2 Application of back-stepping control design to strict feedback


systems
Example 4.1. Consider the strict feedback nonlinear system

ẏ1 = y1² − y1³ + y2,    (4.2.1)

ẏ2 = y3,    (4.2.2)
ẏ3 = u.    (4.2.3)

Figure 4.1: Block diagram of system (4.2.1) - (4.2.3)

Findings:
We employ a back-stepping design approach. For convenience, let

y2 = ϕ1 (y1 ) + z2 (4.2.4)
ẏ2 = ϕ̇1 + ż2 (4.2.5)

Here, z2 and ϕ1 are scalar-valued. Making ż2 the subject, we get

ż2 = ẏ2 − ϕ̇1 (4.2.6)

We substitute (4.2.2) into (4.2.6) for ẏ2, and get

ż2 = y3 − ϕ̇1 (4.2.7)

We substitute equation (4.2.4) into (4.2.1), we obtain

ẏ1 = y1² − y1³ + [ϕ1 + z2]    (4.2.8)

Similarly, let

y3 = ϕ2(y1, y2) + z3 (4.2.9)
ẏ3 = ϕ̇2 + ż3 (4.2.10)

We substitute (4.2.3) into (4.2.10) for y˙3 , we obtain

u = ϕ̇2 + ż3

Making ż3 the subject, we get

ż3 = u − ϕ̇2 (4.2.11)

Substituting (4.2.9) into (4.2.7), we get

ż2 = ϕ2 + z3 − ϕ̇1 (4.2.12)

Using (4.2.8), (4.2.11), and (4.2.12), the original system is transformed into:

ẏ1 = y1² − y1³ + [ϕ1 + z2]    (4.2.13)

ż2 = [ϕ2 + z3] − ϕ̇1    (4.2.14)
ż3 = u − ϕ̇2    (4.2.15)

We choose the following Lyapunov function candidates:

V1(y1) = ½y1²,  V2(z2) = ½z2²,  V3(z3) = ½z3²

V(y1, z2, z3) = ½y1² + ½z2² + ½z3²

V̇(y1, z2, z3) = (d/dt)V1(y1) + (d/dt)V2(z2) + (d/dt)V3(z3)

V̇ = y1ẏ1 + z2ż2 + z3ż3

Substituting for ẏ1, ż2 and ż3 from (4.2.8), (4.2.12) and (4.2.11) respectively, we obtain

V̇ = y1[y1² − y1³ + (ϕ1 + z2)] + z2[(ϕ2 + z3) − ϕ̇1] + z3(u − ϕ̇2)

  = y1[y1² − y1³ + ϕ1] + z2[y1 + (ϕ2 + z3) − ϕ̇1] + z3(u − ϕ̇2)

V̇ = y1[y1² − y1³ + ϕ1] + z2[y1 + ϕ2 − ϕ̇1] + z3(z2 + u − ϕ̇2)    (4.2.16)

We let y1² − y1³ + ϕ1 ≜ −y1 − y1³, so that

ϕ1 = −y1 − y1²    (4.2.17)

Next, we let y1 + ϕ2 − ϕ̇1 ≜ −z2, so that

−z2 = y1 + ϕ2 − (∂ϕ1(y1)/∂y1)ẏ1    (4.2.18)

Substituting (4.2.17) into (4.2.18) for ϕ1, we obtain

−z2 = y1 + ϕ2 − (∂[−y1 − y1²]/∂y1)ẏ1    (4.2.19)
    = y1 + ϕ2 − (−1 − 2y1)ẏ1    (4.2.20)

Substituting (4.2.1) into (4.2.20), we get

−z2 = y1 + ϕ2 − (−1 − 2y1)(y1² − y1³ + y2)    (4.2.21)

Substituting (4.2.4) into (4.2.21) for z2, we get

−(y2 − ϕ1(y1)) = y1 + ϕ2 − (−1 − 2y1)(y1² − y1³ + y2)    (4.2.22)

ϕ2 = −y1 + (−1 − 2y1)(y1² − y1³ + y2) − (y2 − ϕ1(y1))    (4.2.23)

Substituting (4.2.17) into (4.2.23) for ϕ1 , we have

ϕ2 = −y1 + (−1 − 2y1)(y1² − y1³ + y2) − (y2 + y1 + y1²)    (4.2.24)

We intend to find u in terms of y1 , y2 and y3 i.e., u(y1 , y2 , y3 ) such that

−z3 ≜ z2 + u − ϕ̇2

Substituting for ϕ̇2, we obtain

−z3 ≜ z2 + u − [(∂ϕ2(y1, y2)/∂y1)ẏ1 + (∂ϕ2(y1, y2)/∂y2)ẏ2]

Making u the subject, we have

u = −z2 + [(∂ϕ2(y1, y2)/∂y1)ẏ1 + (∂ϕ2(y1, y2)/∂y2)ẏ2] − z3

We back-step to the initial variables, using z2 = y2 − ϕ1(y1) and z3 = y3 − ϕ2(y1, y2). We obtain

u = −[y2 − ϕ1(y1)] + [(∂ϕ2(y1, y2)/∂y1)ẏ1 + (∂ϕ2(y1, y2)/∂y2)ẏ2] − [y3 − ϕ2(y1, y2)]    (4.2.25)

Substituting (4.2.1) and (4.2.2) into (4.2.25) for ẏ1 and ẏ2 respectively, we obtain

u = −[y2 − ϕ1(y1)] + (∂ϕ2(y1, y2)/∂y1)(y1² − y1³ + y2) + (∂ϕ2(y1, y2)/∂y2)y3 − [y3 − ϕ2(y1, y2)]    (4.2.26)

Simplifying (4.2.24), we obtain

ϕ2(y1, y2) = −2y1 − 2y2 + 2y1⁴ − y1³ − 2y1y2 − 2y1²    (4.2.27)

Now,

∂ϕ2(y1, y2)/∂y1 = −2 − 4y1 − 3y1² + 8y1³ − 2y2    (4.2.28)
∂ϕ2(y1, y2)/∂y2 = −2y1 − 2    (4.2.29)

Substituting (4.2.27) − (4.2.29) into (4.2.26), we obtain

u = −5y1² − 3y1³ − 5y2 + 5y1⁴ − 6y1y2 + 11y1⁵ − 8y1⁶ + 10y1³y2 − 5y1²y2 − 2y2² − 2y1y3 − 3y3 − 3y1    (4.2.30)

Substituting for ϕ1, ϕ2, ϕ̇1, z2 and z3 into V and V̇, we obtain

V = ½y1² + ½(y2 + y1 + y1²)² + ½(y3 + 2y1 + 2y2 + 2y1² + y1³ − 2y1⁴ + 2y1y2)² > 0    (4.2.31)

V̇ = −y1² − y1⁴ − (y2 + y1 + y1²)² − (y3 + 2y1 + 2y2 + 2y1² + y1³ − 2y1⁴ + 2y1y2)² < 0    (4.2.32)

Since V is positive definite and radially unbounded (V(y) → ∞ as ∥y∥ → ∞) and V̇ is negative definite,
Theorem 3 is satisfied; hence the origin is globally asymptotically stable.
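As a check of this example, the following sketch simulates the closed loop (4.2.1) − (4.2.3) numerically. Rather than hard-coding the long expression (4.2.30), the controller is rebuilt from the recursive definitions of ϕ1 and ϕ2; the hand-rolled RK4 integrator and the initial state (0.5, −0.5, 0.5) are our own choices, standing in for the MATLAB simulation:

```python
# Closed-loop simulation sketch of Example 4.1, with the back-stepping
# controller assembled from phi1 = -y1 - y1**2 and phi2 as in (4.2.24).
def phi1(y1):
    return -y1 - y1 ** 2

def phi2(y1, y2):
    # phi2 = -z2 - y1 + phi1_dot, phi1_dot = (-1 - 2*y1)*(y1**2 - y1**3 + y2)
    z2 = y2 - phi1(y1)
    return -z2 - y1 + (-1.0 - 2.0 * y1) * (y1 ** 2 - y1 ** 3 + y2)

def control(y1, y2, y3):
    z2 = y2 - phi1(y1)
    z3 = y3 - phi2(y1, y2)
    y1_dot = y1 ** 2 - y1 ** 3 + y2
    # partials of phi2, cf. (4.2.28)-(4.2.29)
    d1 = -2.0 - 4.0 * y1 - 3.0 * y1 ** 2 + 8.0 * y1 ** 3 - 2.0 * y2
    d2 = -2.0 - 2.0 * y1
    phi2_dot = d1 * y1_dot + d2 * y3
    return -z3 - z2 + phi2_dot          # from z2 + u - phi2_dot = -z3

def rhs(y):
    y1, y2, y3 = y
    return [y1 ** 2 - y1 ** 3 + y2, y3, control(y1, y2, y3)]

def rk4_step(y, dt):
    k1 = rhs(y)
    k2 = rhs([y[i] + 0.5 * dt * k1[i] for i in range(3)])
    k3 = rhs([y[i] + 0.5 * dt * k2[i] for i in range(3)])
    k4 = rhs([y[i] + dt * k3[i] for i in range(3)])
    return [y[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

y = [0.5, -0.5, 0.5]                    # arbitrary initial state
for _ in range(10000):                  # 10 s with dt = 0.001
    y = rk4_step(y, 0.001)
assert max(abs(c) for c in y) < 1e-3    # all states converge to the origin
```

Since V̇ ⩽ −2V along trajectories here, V decays at least like e^(−2t), which is why all three states are negligible after ten seconds.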

Example 4.2. We find u(y1, y2, y3) such that the given system is globally asymptotically stable in R³.
Here, y1, y2 and y3 denote the states in R³ and u denotes the control input of the system.
Consider the strict feedback nonlinear system.

ẏ1 = y1² + (y1 + 1)y2,    (4.3.1)

ẏ2 = y1² + y3,    (4.3.2)
ẏ3 = u.    (4.3.3)

Findings:

We use a back-stepping design approach. For convenience, let

y2 = ϕ1 (y1 ) + z2 (4.3.4)
ẏ2 = ϕ̇1 + ż2 (4.3.5)

where z2 and z3 are scalars. Making ż2 the subject, we get

ż2 = ẏ2 − ϕ̇1 (4.3.6)

Substituting (4.3.2) into (4.3.6) for ẏ2 , we get

ż2 = y3 − ϕ̇1 + y1²    (4.3.7)

Substitute equation (4.3.4) into (4.3.1), we get

ẏ1 = y1² + (y1 + 1)(ϕ1 + z2)    (4.3.8)

Similarly, let

y3 = ϕ2 (y1 , y2 ) + z3 (4.3.9)
ẏ3 = ϕ̇2 (y1 , y2 ) + ż3 (4.3.10)

We substitute (4.3.3) into (4.3.10), we get

u = ϕ̇2 (y1 , y2 ) + ż3


ż3 = u − ϕ̇2 (y1 , y2 ) (4.3.11)

Substitute (4.3.9) into (4.3.7) for y3 , we get

ż2 = ϕ2 + z3 − ϕ̇1 + y1²    (4.3.12)

From (4.3.8), (4.3.11) and (4.3.12), the given system is transformed into:

ẏ1 = y1² + (y1 + 1)(ϕ1 + z2)    (4.3.13)


ż2 = ϕ2 + z3 − ϕ̇1 + y1²    (4.3.14)
ż3 = u − ϕ̇2 (4.3.15)

We choose the following Lyapunov candidate functions:

V1(y1) = ½y1²,  V2(z2) = ½z2²,  V3(z3) = ½z3²

V(y1, z2, z3) = ½y1² + ½z2² + ½z3²

V̇(y1, z2, z3) = (d/dt)V1(y1) + (d/dt)V2(z2) + (d/dt)V3(z3)

V̇ = y1ẏ1 + z2ż2 + z3ż3    (4.3.16)

V̇ = y1[y1² + (y1 + 1)ϕ1] + z2[y1² + ϕ2 − ϕ̇1 + y1(y1 + 1)] + z3[u − ϕ̇2 + z2]    (4.3.17)

where we refer to the three bracketed terms as (i), (ii) and (iii) respectively.

We aim to make V̇ negative definite. In order to do so, in (4.3.17)(i) we let

−y1 ≜ y1² + (y1 + 1)ϕ1

and solve for ϕ1 . Simplifying, we obtain

−y1 (y1 + 1) = ϕ1 (y1 + 1)


∴ ϕ1 = −y1 (4.3.18)
From (4.3.17), we have that

y1[y1² + (y1 + 1)ϕ1] = y1(−y1) = −y1² < 0    (4.3.19)

Next, in equation (4.3.17)(ii) we find ϕ2 (y1 , y2 ). Let,

−z2 ≜ y1 + 2y1² + ϕ2 − ϕ̇1

Rewriting, we have

−z2 = y1 + 2y1² + ϕ2 − (∂ϕ1(y1)/∂y1)ẏ1

where

(∂ϕ1(y1)/∂y1)ẏ1 = (∂(−y1)/∂y1)ẏ1 = −ẏ1

Now, we have

−z2 = y1 + 2y1² + ϕ2 + ẏ1

Substituting for ẏ1 , we have

ϕ2(y1, y2) = −y1 − 2y1² − (y1² + (y1 + 1)y2) − z2

ϕ2(y1, y2) = −y1 − 2y1² − (y1² + (y1 + 1)y2) − (y2 − ϕ1)

Substituting for ϕ1 , we get

ϕ2(y1, y2) = −y1 − 2y1² − (y1² + (y1 + 1)y2) − (y2 + y1)    (4.3.20)

From (4.3.17), we consider

z2[y1² + ϕ2 − ϕ̇1 + y1(y1 + 1)]

Substituting, we get

z2[y1² − y1 − 2y1² − (y1² + (1 + y1)y2) − y2 − y1 + y1(y1 + 1) + y1² + y2(y1 + 1)]

z2(−y2 − y1) = −(y2 + y1)² < 0    (4.3.21)

Next, we find u in terms of y1 , y2 and y3 i.e., u(y1 , y2 , y3 ). Let

−z3 ≜ z2 + u − ϕ̇2

Substituting for ϕ̇2 we have

−z3 = z2 + u − [(∂ϕ2(y1, y2)/∂y1)ẏ1 + (∂ϕ2(y1, y2)/∂y2)ẏ2]

u = −z2 + [(∂ϕ2(y1, y2)/∂y1)ẏ1 + (∂ϕ2(y1, y2)/∂y2)ẏ2] − z3

Substituting (4.3.4) and (4.3.9) for z2 and z3, and (4.3.1) and (4.3.2) for ẏ1 and ẏ2 respectively, we obtain

u = −(y2 − ϕ1) + (∂ϕ2/∂y1)(y1² + (y1 + 1)y2) + (∂ϕ2/∂y2)(y1² + y3) − (y3 − ϕ2)    (4.3.22)

From (4.3.20) we have

∂ϕ2/∂y1 = ∂[−y1 − 2y1² − (y1² + (y1 + 1)y2) − (y2 + y1)]/∂y1 = −2 − y2 − 6y1    (4.3.23)

∂ϕ2/∂y2 = ∂[−y1 − 2y1² − (y1² + (y1 + 1)y2) − (y2 + y1)]/∂y2 = −y1 − 2    (4.3.24)

Substituting (4.3.23) and (4.3.24) into (4.3.22), we get

u = −(y2 − ϕ1) + (−2 − y2 − 6y1)(y1² + y2(1 + y1)) + (−y1 − 2)(y1² + y3) − (y3 − ϕ2)    (4.3.25)

For u to be a function of y1, y2 and y3, we substitute for ϕ1 and ϕ2:

u = −(y1 + y2) + (−2 − y2 − 6y1)(y1² + (1 + y1)y2) + (−y1 − 2)(y1² + y3)
  + [−y3 − y1 − 2y1² − (y1² + (1 + y1)y2) − (y2 + y1)]

Simplifying, we obtain

u = −5y2 − 3y1 − 7y1³ − 7y1²y2 − 9y1y2 − 7y1² − y1y2² − y2² − y1y3 − 3y3    (4.3.26)

From (4.3.17), we consider specifically

z3(u − ϕ̇2 + z2) = z3(−z2 + ϕ̇2 − z3 − ϕ̇2 + z2) = −z3² < 0    (4.3.27)

where

z3 = y3 − ϕ2 = y3 + y1 + 2y1² + (y1² + (y1 + 1)y2) + y2 + y1

Making the appropriate substitutions, we get

V = ½y1² + ½(y2 + y1)² + ½(y3 + y1 + 2y1² + (y1² + (y1 + 1)y2) + y2 + y1)² > 0    (4.3.28)

V̇ = −y1² − z2² − z3² < 0
V̇ = −y1² − (y2 + y1)² − (y3 + y1 + 2y1² + (y1² + (y1 + 1)y2) + y2 + y1)² < 0    (4.3.29)

Hence, since Theorem 3 is satisfied, the origin of the given system is globally asymptotically stable.
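The decrease claimed in (4.3.29) can be spot-checked numerically. The sketch below rebuilds z2, z3, ϕ2 and u from their defining relations (rather than from the expanded formula (4.3.26)) and verifies, at randomly sampled states, that V̇ equals −y1² − z2² − z3²:

```python
import random

# Spot-check of the Lyapunov decrease for Example 4.2: with phi1 = -y1,
# phi2 from (4.3.20), and u = -z3 - z2 + phi2_dot, the derivative of
# V = (y1**2 + z2**2 + z3**2)/2 equals -y1**2 - z2**2 - z3**2, cf. (4.3.29).
def closed_loop_vdot(y1, y2, y3):
    z2 = y2 + y1                                   # z2 = y2 - phi1(y1)
    phi2 = -z2 - y1 - 2*y1**2 - (y1**2 + (y1 + 1)*y2)
    z3 = y3 - phi2
    y1_dot = y1**2 + (y1 + 1)*y2
    y2_dot = y1**2 + y3
    d1 = -2 - 6*y1 - y2                            # d(phi2)/dy1, cf. (4.3.23)
    d2 = -2 - y1                                   # d(phi2)/dy2, cf. (4.3.24)
    phi2_dot = d1*y1_dot + d2*y2_dot
    u = -z3 - z2 + phi2_dot
    # z2_dot = y2_dot - phi1_dot = y2_dot + y1_dot; z3_dot = u - phi2_dot
    v_dot = y1*y1_dot + z2*(y2_dot + y1_dot) + z3*(u - phi2_dot)
    return v_dot, -(y1**2 + z2**2 + z3**2)

for _ in range(1000):
    y = [random.uniform(-2, 2) for _ in range(3)]
    got, want = closed_loop_vdot(*y)
    assert abs(got - want) < 1e-8
```

Because the identity is polynomial, it holds exactly at every state, not merely along particular trajectories; the tolerance only absorbs floating-point rounding.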

We now extend this concept to a system in R5 .


Example 4.3. Consider the nonlinear system in strict-feedback form.

ẏ1 = y1² − y1³ + y2,    (4.4.1)

ẏ2 = y1²y2 + y3,    (4.4.2)
ẏ3 = y2² + y4,    (4.4.3)
ẏ4 = y5,    (4.4.4)
ẏ5 = u.    (4.4.5)

Our goal is to find a control u(y1 , y2 , y3 , y4 , y5 ) that will take the states y1 , y2 , y3 , y4 , y5 to the origin as
t → ∞ starting at any given initial value. For convenience, we transform the given system by considering
the following.

We define

ϕ1 (y1 ) ≜ ϕ1
ϕ2 (y1 , y2 ) ≜ ϕ2
ϕ3 (y1 , y2 , y3 ) ≜ ϕ3
ϕ4 (y1 , y2 , y3 , y4 ) ≜ ϕ4

Let,

y2 = ϕ 1 + z 2 (4.4.6)
z 2 = y2 − ϕ 1 (4.4.7)
ẏ2 = ϕ̇1 + ż2 (4.4.8)
ż2 = ẏ2 − ϕ̇1 (4.4.9)

Substituting equation (4.4.6) into (4.4.1), we get

ẏ1 = y1² − y1³ + ϕ1 + z2    (4.4.10)

let,

y3 = ϕ 2 + z 3 (4.4.11)
ẏ3 = ϕ̇2 + ż3 (4.4.12)

From equation (4.4.9) on substituting for ẏ2 , we get

ż2 = y1²y2 + y3 − ϕ̇1    (4.4.13)

Substituting equation (4.4.11) into (4.4.13), we obtain

ż2 = y1²y2 + ϕ2 + z3 − ϕ̇1    (4.4.14)

Let,

y4 = ϕ 3 + z 4 (4.4.15)
ẏ4 = ϕ̇3 + ż4 (4.4.16)
ż4 = ẏ4 − ϕ̇3 (4.4.17)

From equation (4.4.12), we have

ż3 = y2² + y4 − ϕ̇2    (4.4.18)

ż3 = y2² + ϕ3 + z4 − ϕ̇2    (4.4.19)

Let,

y5 = ϕ 4 + z 5 (4.4.20)
ẏ5 = ϕ̇4 + ż5 (4.4.21)

From equation (4.4.17) on substituting for ẏ4 , we have

ż4 = y5 − ϕ̇3 (4.4.22)

Let,

y5 = ϕ 4 + z 5 (4.4.23)
ẏ5 = ϕ̇4 + ż5 (4.4.24)

Substitute equation (4.4.23) into (4.4.22), we have

ż4 = ϕ4 + z5 − ϕ̇3 (4.4.25)


ż5 = ẏ5 − ϕ̇4
ż5 = u − ϕ̇4 (4.4.26)

Hence, from equations (4.4.10), (4.4.14), (4.4.19), (4.4.25), and (4.4.26) the given equation is trans-
formed into

ẏ1 = y1² − y1³ + ϕ1 + z2

ż2 = y1²y2 + ϕ2 + z3 − ϕ̇1
ż3 = y2² + ϕ3 + z4 − ϕ̇2
ż4 = ϕ4 + z5 − ϕ̇3
ż5 = u − ϕ̇4

We choose the following Lyapunov candidate functions:

V(y1, z2, z3, z4, z5) = ½y1² + ½z2² + ½z3² + ½z4² + ½z5²    (4.4.27)
V̇ = y1ẏ1 + z2ż2 + z3ż3 + z4ż4 + z5ż5    (4.4.28)

Substituting equations (4.4.10), (4.4.14), (4.4.19), (4.4.25) and (4.4.26) into (4.4.28) for ẏ1, ż2, ż3, ż4 and ż5
respectively, we obtain

V̇ = y1[y1² − y1³ + ϕ1 + z2] + z2[y1²y2 + ϕ2 + z3 − ϕ̇1] + z3[y2² + ϕ3 + z4 − ϕ̇2]
  + z4[ϕ4 + z5 − ϕ̇3] + z5[u − ϕ̇4]

Re-arranging terms,

V̇ = y1[y1² − y1³ + ϕ1] + z2[y1 + y1²y2 + ϕ2 − ϕ̇1] + z3[y2² + ϕ3 + z2 − ϕ̇2]
  + z4[ϕ4 + z3 − ϕ̇3] + z5[z4 + u − ϕ̇4]    (4.4.29)

where we label the five bracketed terms (i)–(v).

Here, we aim to make V̇ negative definite. In order to do so, we consider each term in equation (4.4.29).
We consider equation (4.4.29)(i) separately. One of the ways to make it negative definite is to make

y1² − y1³ + ϕ1 ≜ −y1 − y1³

Then,

ϕ1 ≜ −y1 − y1³ − y1² + y1³

ϕ1 = −y1 − y1²    (4.4.30)

Substituting equation (4.4.30) into (4.4.29)(i), we have

y1[y1² − y1³ − y1 − y1²] = y1[−y1³ − y1] = −[y1² + y1⁴] < 0,  ∀y1 ̸= 0

Next, we consider (4.4.29)(ii). We can achieve negative definiteness by letting

y1 + y1²y2 + ϕ2 − ϕ̇1 ≜ −z2

Then,

ϕ2 = −z2 + ϕ̇1 − y1 − y1²y2    (4.4.31)

From equation (4.4.1) and (4.4.30), we have

ϕ̇1 = (−1 − 2y1)(y1² − y1³ + y2)    (4.4.32)

z2 = y2 − ϕ1
   = y2 − (−y1 − y1²)

z2 = y2 + y1 + y1²    (4.4.33)

Substituting equations (4.4.32) and (4.4.33) into (4.4.31), we have

ϕ2 = 2y1⁴ − 2y2 − 2y1y2 − y1²y2 − 2y1² − y1³ − 2y1    (4.4.34)

Substituting equations (4.4.32) and (4.4.34) into (4.4.29)(ii), we have

[y2 + y1 + y1²][y1 + y1²y2 + 2y1⁴ − 2y2 − 2y1y2 − y1²y2 − 2y1² − y1³ − 2y1 − (−1 − 2y1)(y1² − y1³ + y2)]
= −(y2 + y1 + y1²)² < 0,  ∀y1, y2 ̸= 0

Next, we consider (4.4.29)(iii). This can be made negative definite by letting

y2² + z2 + ϕ3 − ϕ̇2 ≜ −z3
ϕ3 = −z3 − y2² − z2 + ϕ̇2    (4.4.35)

From (4.4.11), we have

z3 = y3 − 2y1⁴ + 2y2 + 2y1y2 + y1²y2 + 2y1² + y1³ + 2y1    (4.4.36)


ϕ̇2 = (∂ϕ2/∂y1)ẏ1 + (∂ϕ2/∂y2)ẏ2

ϕ̇2 = (∂[2y1⁴ − 2y2 − 2y1y2 − y1²y2 − 2y1² − y1³ − 2y1]/∂y1)(y1² − y1³ + y2)
    + (∂[2y1⁴ − 2y2 − 2y1y2 − y1²y2 − 2y1² − y1³ − 2y1]/∂y2)(y1²y2 + y3)

ϕ̇2 = 6y1³y2 − 2y3 − 4y1y2 − 2y1y3 − 2y1y2² − 7y1²y2 − y1²y3 − 2y2 + y1⁴y2
    − 2y1² − 2y1³ − 2y2² + y1⁴ + 11y1⁵ − 8y1⁶    (4.4.37)

Making the appropriate substitutions, we get

ϕ3 = 6y1³y2 − 3y2 − 3y3 − 6y1y2 − 2y1y3 − 2y1y2² − 8y1²y2 − y1²y3 − y1 − y1⁴y2
    − 3y1² − 3y1³ − 3y2² + 3y1⁴ + 11y1⁵ − 8y1⁶    (4.4.38)

Making the appropriate substitutions, we obtain

z3[y2² + ϕ3 + z2 − ϕ̇2] = −(2y1 + 2y2 + y3 + 2y1y2 + y1²y2 + 2y1² + y1³ − 2y1⁴)² < 0,  ∀y1, y2, y3 ̸= 0    (4.4.39)

We continue by making (4.4.29)(iv) negative definite. This can be achieved by letting

z3 + ϕ4 − ϕ̇3 ≜ −z4
ϕ4 = −z4 − z3 + ϕ̇3 (4.4.40)

From equation (4.4.15), we have

z4 = −6y1³y2 + 3y2 + 3y3 + 6y1y2 + 2y1y3 + 2y1y2² + 8y1²y2 + y1²y3 + y1 + y1⁴y2
    + 3y1² + 3y1³ + 3y2² − 3y1⁴ − 11y1⁵ + 8y1⁶ + y4    (4.4.41)

ϕ̇3 = (∂ϕ3/∂y1)ẏ1 + (∂ϕ3/∂y2)ẏ2 + (∂ϕ3/∂y3)ẏ3

Making the appropriate substitutions, we obtain

ϕ̇3 = 9y1²y2² − 3y3 − 3y4 − y2 + 2y1³y2² − 6y1y2 − 6y1y3 − 2y1y4 − 8y2y3 − 18y1y2² − 18y1²y2
    − 10y1²y3 − 4y1³y2 − y1²y4 + 6y1³y3 + 81y1⁴y2 + 3y1⁴y3 − 56y1⁵y2 − 3y1⁶y2 − y1²
    − 5y1³ − 9y2² − 3y1⁴ − 2y2³ + 21y1⁵ + 43y1⁶ − 103y1⁷ + 48y1⁸ − 6y1y2y3    (4.4.42)

Substituting equations (4.4.36), (4.4.41), and (4.4.42) into equation (4.4.40), we obtain

ϕ4 = 6y1³y2 − 3y2 − 3y3 − 6y1y2 − 2y1y3 − 2y1y2² − 8y1²y2 − y1²y3 − y1 − y1⁴y2
    − 3y1² − 3y1³ − 3y2² + 3y1⁴ + 11y1⁵ + 8y1⁶ − y4 − y3 + 2y1⁴ − 2y2 + 2y1y2 − y1²y2 − 2y1²
    − y1³ − 2y1 + 9y1²y2² − 3y3 − 3y4 − y2 + 2y1³y2² − 6y1y2 − 6y1y3 − 2y1y4 − 8y2y3
    − 18y1y2² − 18y1²y2 − 10y1²y3 − 4y1³y2 − y1²y4 + 6y1³y3 + 81y1⁴y2 + 3y1⁴y3 − 56y1⁵y2 − 3y1⁶y2 − y1²
    − 5y1³ − 9y2² − 3y1⁴ − 2y2³ + 21y1⁵ + 43y1⁶ − 103y1⁷ + 48y1⁸ − 6y1y2y3

Simplifying, we get

ϕ4 = 9y1²y2² − 6y2 − 7y3 − 4y4 − 3y1 + 2y1³y2² − 14y1y2 − 8y1y3 − 2y1y4 − 8y2y3 − 20y1y2² − 27y1²y2
    − 11y1²y3 + 2y1³y2 − y1²y4 + 6y1³y3 + 82y1⁴y2 + 3y1⁴y3 − 56y1⁵y2 − 3y1⁶y2 − 6y1² − 9y1³ − 12y2²
    + 2y1⁴ − 2y2³ + 32y1⁵ + 35y1⁶ − 103y1⁷ + 48y1⁸ − 6y1y2y3    (4.4.43)

z4[z3 + ϕ4 − ϕ̇3] = −(−6y1³y2 + 3y2 + 3y3 + 6y1y2 + 2y1y3 + 2y1y2² + 8y1²y2 + y1²y3 + y1 + y1⁴y2
+ 3y1² + 3y1³ + 3y2² − 3y1⁴ − 11y1⁵ + 8y1⁶ + y4)² < 0,  ∀ y1, y2, y3, y4 ̸= 0    (4.4.44)

We consider (4.4.29)(v), i.e., z5[z4 + u − ϕ̇4]. In order for this to be negative definite, we let

z4 + u − ϕ̇4 ≜ −z5
u = −z5 − z4 + ϕ̇4 (4.4.45)

From equation (4.4.23), we get

z 5 = y5 − ϕ 4

Making the appropriate substitutions, we obtain

z5 = 3y1 + 6y2 + 7y3 + 4y4 + y5 − 9y1²y2² − 2y1³y2² + 14y1y2 + 8y1y3 + 2y1y4 + 8y2y3
    + 20y1y2² + 27y1²y2 + 11y1²y3 − 2y1³y2 + y1²y4 − 6y1³y3 − 82y1⁴y2 − 3y1⁴y3 + 56y1⁵y2
    + 3y1⁶y2 + 6y1² + 9y1³ + 12y2² − 2y1⁴ + 2y2³ − 32y1⁵ − 35y1⁶ + 103y1⁷ − 48y1⁸ + 6y1y2y3    (4.4.46)

ϕ̇4 = (∂ϕ4/∂y1)ẏ1 + (∂ϕ4/∂y2)ẏ2 + (∂ϕ4/∂y3)ẏ3 + (∂ϕ4/∂y4)ẏ4

ϕ̇4 = 332y1³y2² − 6y3 − 7y4 − 4y5 − 49y1²y2² − 3y2 − 271y1⁴y2² − 20y1⁵y2² − 12y1y2 − 14y1y3 − 8y1y4
    − 32y2y3 − 2y1y5 − 10y2y4 − 62y1y2² − 47y1²y2 + 12y1y2³ − 6y1y3² − 35y1²y3 − 46y1³y2
    − 13y1²y4 − 12y1³y3 − 12y2²y3 + 193y1⁴y2 − y1²y5 + 6y1³y4 + 122y1⁴y3 + 534y1⁵y2 + 5y1⁴y4
    − 62y1⁵y3 − 1247y1⁶y2 − 15y1⁶y3 + 590y1⁷y2 + 15y1⁸y2 − 3y1² − 9y1³ − 21y2² − 15y1⁴ − 28y2³ − 8y3²
    + 35y1⁵ + 152y1⁶ + 50y1⁷ − 931y1⁸ + 1105y1⁹ − 384y1¹⁰ + 22y1²y2y3 + 16y1³y2y3 − 62y1y2y3 − 8y1y2y4
    (4.4.47)

Making appropriate substitutions, we obtain

u = 334y1³y2² − 12y2 − 16y3 − 12y4 − 5y5 − 40y1²y2² − 4y1 − 271y1⁴y2² − 20y1⁵y2² − 32y1y2 − 24y1y3
    − 10y1y4 − 40y2y3 − 2y1y5 − 10y2y4 − 84y1y2² − 82y1²y2 + 12y1y2³ − 6y1y3² − 47y1²y3
    − 38y1³y2 − 14y1²y4 − 6y1³y3 − 12y2²y3 + 276y1⁴y2 − y1²y5 + 6y1³y4 + 125y1⁴y3 + 478y1⁵y2
    + 5y1⁴y4 − 62y1⁵y3 − 1250y1⁶y2 − 15y1⁶y3 + 590y1⁷y2 + 15y1⁸y2 − 12y1² − 21y1³ − 36y2²
    − 10y1⁴ − 30y2³ − 8y3² + 78y1⁵ + 179y1⁶ − 53y1⁷ − 883y1⁸ + 1105y1⁹ − 384y1¹⁰ + 22y1²y2y3
    + 16y1³y2y3 − 68y1y2y3 − 8y1y2y4    (4.4.48)

From equation (4.4.29)(v), we obtain

z5[z4 + u − ϕ̇4] = −(3y1 + 6y2 + 7y3 + 4y4 + y5 − 9y1²y2² − 2y1³y2² + 14y1y2 + 8y1y3 + 2y1y4 + 8y2y3
+ 20y1y2² + 27y1²y2 + 11y1²y3 − 2y1³y2 + y1²y4 − 6y1³y3 − 82y1⁴y2 − 3y1⁴y3 + 56y1⁵y2 + 3y1⁶y2
+ 6y1² + 9y1³ + 12y2² − 2y1⁴ + 2y2³ − 32y1⁵ − 35y1⁶ + 103y1⁷ − 48y1⁸ + 6y1y2y3)² < 0    (4.4.49)

∀ y1, y2, y3, y4, y5 ̸= 0
From equation (4.4.27), we have

V(y1, z2, z3, z4, z5) = ½y1² + ½(y2 + y1 + y1²)² + ½(y3 − 2y1⁴ + 2y2 + 2y1y2 + y1²y2 + 2y1² + y1³ + 2y1)²
+ ½(−6y1³y2 + 3y2 + 3y3 + 6y1y2 + 2y1y3 + 2y1y2² + 8y1²y2 + y1²y3 + y1 + y1⁴y2 + 3y1² + 3y1³ + 3y2²
− 3y1⁴ − 11y1⁵ + 8y1⁶ + y4)² + ½(3y1 + 6y2 + 7y3 + 4y4 + y5 − 9y1²y2² − 2y1³y2² + 14y1y2 + 8y1y3
+ 2y1y4 + 8y2y3 + 20y1y2² + 27y1²y2 + 11y1²y3 − 2y1³y2 + y1²y4 − 6y1³y3 − 82y1⁴y2 − 3y1⁴y3 + 56y1⁵y2
+ 3y1⁶y2 + 6y1² + 9y1³ + 12y2² − 2y1⁴ + 2y2³ − 32y1⁵ − 35y1⁶ + 103y1⁷ − 48y1⁸ + 6y1y2y3)²
> 0,  ∀ (y1, y2, y3, y4, y5) ̸= 0    (4.4.50)

V̇ = −y1² − y1⁴ − z2² − z3² − z4² − z5² < 0

V̇(y1, z2, z3, z4, z5) = −y1² − y1⁴ − (y2 + y1 + y1²)² − (y3 − 2y1⁴ + 2y2 + 2y1y2 + y1²y2 + 2y1² + y1³ + 2y1)²
− (−6y1³y2 + 3y2 + 3y3 + 6y1y2 + 2y1y3 + 2y1y2² + 8y1²y2 + y1²y3 + y1 + y1⁴y2 + 3y1² + 3y1³ + 3y2²
− 3y1⁴ − 11y1⁵ + 8y1⁶ + y4)² − (3y1 + 6y2 + 7y3 + 4y4 + y5 − 9y1²y2² − 2y1³y2² + 14y1y2 + 8y1y3
+ 2y1y4 + 8y2y3 + 20y1y2² + 27y1²y2 + 11y1²y3 − 2y1³y2 + y1²y4 − 6y1³y3 − 82y1⁴y2 − 3y1⁴y3 + 56y1⁵y2
+ 3y1⁶y2 + 6y1² + 9y1³ + 12y2² − 2y1⁴ + 2y2³ − 32y1⁵ − 35y1⁶ + 103y1⁷ − 48y1⁸ + 6y1y2y3)²
< 0,  ∀ (y1, y2, y3, y4, y5) ̸= 0    (4.4.51)

Since Theorem 3 is satisfied, the origin is globally asymptotically stable.
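The algebra in this example is heavy, so a symbolic check is useful. The sketch below (using SymPy, our own tooling choice) rebuilds ϕ1, ..., ϕ4 and u from the defining relations (4.4.31), (4.4.35), (4.4.40) and (4.4.45), and verifies that V̇ expands exactly to −y1² − y1⁴ − z2² − z3² − z4² − z5², the sum of the negative terms obtained for (i)–(v):

```python
import sympy as sp

# Symbolic verification of the recursive back-stepping design of Example 4.3.
y1, y2, y3, y4, y5 = sp.symbols('y1 y2 y3 y4 y5')

f = [y1**2 - y1**3 + y2, y1**2*y2 + y3, y2**2 + y4, y5]  # y1..y4 dynamics

phi1 = -y1 - y1**2                               # (4.4.30)
z2 = y2 - phi1
phi1_dot = sp.diff(phi1, y1) * f[0]

phi2 = -z2 + phi1_dot - y1 - y1**2*y2            # (4.4.31)
z3 = y3 - phi2
phi2_dot = sp.diff(phi2, y1)*f[0] + sp.diff(phi2, y2)*f[1]

phi3 = -z3 - y2**2 - z2 + phi2_dot               # (4.4.35)
z4 = y4 - phi3
phi3_dot = (sp.diff(phi3, y1)*f[0] + sp.diff(phi3, y2)*f[1]
            + sp.diff(phi3, y3)*f[2])

phi4 = -z4 - z3 + phi3_dot                       # (4.4.40)
z5 = y5 - phi4
phi4_dot = (sp.diff(phi4, y1)*f[0] + sp.diff(phi4, y2)*f[1]
            + sp.diff(phi4, y3)*f[2] + sp.diff(phi4, y4)*f[3])

u = -z5 - z4 + phi4_dot                          # (4.4.45)

v_dot = (y1*f[0] + z2*(f[1] - phi1_dot) + z3*(f[2] - phi2_dot)
         + z4*(f[3] - phi3_dot) + z5*(u - phi4_dot))
target = -(y1**2 + y1**4 + z2**2 + z3**2 + z4**2 + z5**2)
assert sp.expand(v_dot - target) == 0            # exact polynomial identity
```

Because every object here is polynomial, `sp.expand` settles the identity exactly; the same script can also be used to regenerate the long expressions (4.4.38)–(4.4.48) and cross-check their coefficients.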

4.3 Simulation results


Simulations are carried out in MATLAB to validate the suggested control design technique and confirm
its efficacy.
Figure 4.2: Graphs depicting each state's globally asymptotically stable response for Example 4.1

Figure 4.3: Graphs depicting each state's globally asymptotically stable response for Example 4.2


Figure 4.4: Graphs depicting each state's globally asymptotically stable response for Example 4.3

CHAPTER 5

CONCLUSION

In this research project, we discussed a back-stepping design-based control strategy for strict-feedback
nonlinear systems that yields both a controller and a Lyapunov function. To cancel the nonlinear effects and
achieve asymptotic stability, the back-stepping controller iteratively applies Lyapunov functions at each
integrator level. The closed-loop systems were simulated with our controls in MATLAB to verify their
behavior; the graphs show that all states converge to zero from any initial point as time tends
to infinity, which guarantees that the equilibrium point at the origin is globally asymptotically
stable. This work can also be extended from general back-stepping to adaptive back-stepping to
handle uncertainties.

REFERENCES

Alhamdan, M. and Alkandari, M. Backstepping linearization controller of the delta wing rock phenom-
ena.

Aniaku, S. E. and Jackreece, P. C. (2017). A necessary and sufficient condition for linear systems to be
observable. IOSR Journal of Mathematics, 13:6–10.

Aniaku, S. E., Nwaoburu, A., and Mbah, E. (2019). Non linear dynamical control systems and stability.
Global Journal of Pure and Applied Mathematics, 15:185–190.

Aouadi, C., Abouloifa, A., Lachkar, I., Aourir, M., Boussairi, Y., and Hamdoun, A. (2017). Nonlin-
ear controller design and stability analysis for single-phase grid-connected photovoltaic systems.
International Review of Automatic Control (IREACO), 10:306.

Cai, J., Wen, C., Su, H., and Liu, Z. (2013). Robust adaptive failure compensation of hysteretic actuators
for a class of uncertain nonlinear systems. IEEE Transactions on Automatic Control, 58(9):2388–
2394.

Chen, G. (2005). Stability of nonlinear systems. Encyclopedia of RF and Microwave Engineering.

Corriou, J.-P. (2017). Coulson and Richardson's Chemical Engineering: Volume 3B: Process
Control, fourth edition.

Cupelli, M., Riccobono, A., Mirz, M., Ferdowsi, M., and Monti, A. (2018). Modern Control of DC-
Based Power Systems. Elsevier.

Dorf, R. C. and Bishop, R. H. (2008). Modern control systems. Pearson Prentice Hall.

Dukkipati, R. V. (2006). Analysis and design of control systems using MATLAB. New Age International.

Franklin, G. F., Powell, J. D., Emami-Naeini, A., and Powell, J. D. (2002). Feedback control of dynamic
systems, volume 4. Prentice hall Upper Saddle River.

Freidovich, L. B. and Khalil, H. K. (2008). Performance recovery of feedback-linearization-based
designs. IEEE Transactions on Automatic Control, 53(10):2324–2334.

Hahn, W. (1967). The direct method of liapunov. In Stability of Motion, pages 93–165. Springer.

Hsu, H. P. (1995). Schaum’s outlines of theory and problems of signals and systems. McGraw-Hill.

Iqbal, J., Ullah, M., Khan, S. G., Khelifa, B., and Ćuković, S. (2017). Nonlinear control systems - a brief
overview of historical and recent advances. Nonlinear Engineering, 6:301–312.

Isidori, A. and Kang, W. (1995). H∞ control via measurement feedback for general nonlinear
systems. IEEE Transactions on Automatic Control, 40(3):466–472.

Khalil, H. K. (2002). Nonlinear Systems, third edition. Prentice Hall, 115.

Khalil, H. K. (2015). Nonlinear control, volume 406. Pearson New York.

Kilian, C. T. (2001). Modern control technology: components and systems. Delmar Thomson Learning.

Kokotovic, P. V. (1992). The joy of feedback: nonlinear and adaptive. IEEE Control Systems Magazine,
12(3):7–17.

Krener, A. (1999). Feedback linearization. Mathematical control theory, pages 66–98.

Krstić, M. and Kokotović, P. V. (1995). Control lyapunov functions for adaptive nonlinear stabilization.
Systems & Control Letters, 26(1):17–23.

Landau, I. D., Lozano, R., M’Saad, M., and Karimi, A. (2011). Adaptive control: algorithms, analysis
and applications. Springer Science & Business Media.

Lozano, R., Brogliato, B., et al. (1992). Adaptive control of robot manipulators with flexible joints. IEEE
Transactions on Automatic Control, 37(2):174–181.

Madani, T. and Benallegue, A. (2008). Adaptive control via backstepping technique and neural net-
works of a quadrotor helicopter. IFAC Proceedings Volumes, 41(2):6513–6518. 17th IFAC World
Congress.

Magni, J.-F., Bennani, S., and Terlouw, J. (1997). Robust flight control: a design challenge, volume 110.
Springer.

Mann, M. and Shiller, Z. (2008). Dynamic stability of off-road vehicles: Quasi-3d analysis. 2008 IEEE
International Conference on Robotics and Automation, pages 2301–2306.

Marquez, H. J. (2003). Nonlinear Control Systems: Analysis and Design, volume 161. John Wiley
& Sons, Hoboken, NJ.

Martinez, S., Cortes, J., and Bullo, F. (2003). Analysis and design of oscillatory control systems. IEEE
Transactions on Automatic Control, 48:1164–1177.

Matousek, R. and Svarc, I. (2009). Simple methods for stability analysis of nonlinear
control systems.

Merkin, D. R. (2012). Introduction to the Theory of Stability, volume 24. Springer Science & Business
Media.

Morales-Herrera, R., Fernández-Caballero, A., Somolinos, J. A., and Sira-Ramírez, H. (2017). Integration
of sensors in control and automation systems.

Morgan, R. (2015). Linearization and stability analysis of nonlinear problems.

Obi, O. A. (2013). Stability of autonomous and non autonomous differential equations.

Ogata, K. (1999). Modern control engineering. Book Reviews, 35(1181):1184.

Okereke, R. et al. (2016). Lyapunov stability analysis of certain third order nonlinear differential equa-
tions. Applied Mathematics, 7(16):1971.

Oriolo, G., De Luca, A., and Vendittelli, M. (2002). Wmr control via dynamic feedback linearization:
design, implementation, and experimental validation. IEEE Transactions on control systems tech-
nology, 10(6):835–852.

Praly, L., Carnevale, D., and Astolfi, A. (2010). Integrator forwarding via dynamic scaling. In 49th IEEE
Conference on Decision and Control (CDC), pages 5955–5960. IEEE.

Prieto Guerrero, A. and Espinosa Paredes, G. (2018). Linear and non-linear stability analysis in boiling
water reactors.

Slotine, J.-J. and Li, W. (1991). Applied Nonlinear Control. Prentice-Hall of India Pvt. Limited, New Delhi.

Spong, M. W. (1994). Partial feedback linearization of underactuated mechanical systems. In Proceedings
of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94), volume 1,
pages 314–321. IEEE.

Thomsen, J. J., Thomsen, J. J., and Thomsen, J. (2003). Vibrations and stability, volume 2. Springer.

Vidyasagar, M. (2002). Nonlinear systems analysis. SIAM.

Wang, J., Holzapfel, F., and Peter, F. (2013). Comparison of nonlinear dynamic inversion and
backstepping controls with application to a quadrotor. In CEAS EuroGNC Conference, Delft, Netherlands,
pages 1245–1263.

Wikipedia contributors (2021). Backstepping — Wikipedia, the free encyclopedia. [Online; accessed
12-January-2023].

Wikipedia contributors (2023). Actuator — Wikipedia, the free encyclopedia. [Online; accessed 15-
January-2023].

Zhang, T., Ge, S. S., and Hang, C. C. (2000). Adaptive neural network control for strict-feedback
nonlinear systems using backstepping design. Automatica, 36(12):1835–1846.

