Control of Crane System
SUBMITTED BY
MOUMITA PAUL- 116860970
NAMAN GUPTA - 116949330
SUBMITTED TO:
Dr. Waseem Malik
Introduction
Variables:
A) EQUATIONS OF MOTION
Using the above assumptions, we derive the equations of motion and hence the non-linear model of the system.
Analytical modeling of the given cart and pendulum system can be done by two techniques: the Newton-Euler method and the Euler-Lagrange method. In this project, the Euler-Lagrange technique is used, which requires the kinetic and potential energies of the system to formulate its equations of motion. The Euler-Lagrange equation is given as
$$\frac{d}{dt}\left[\frac{\partial \mathcal{L}}{\partial \dot{q}}\right] - \frac{\partial \mathcal{L}}{\partial q} = f$$
Lagrangian equation
The generalized coordinates of the given system are $x$, $\theta_1$ and $\theta_2$. Using the frame of reference shown in Fig., and writing $S_i = \sin\theta_i$, $C_i = \cos\theta_i$, the positions of mass 1 and mass 2 can be computed as follows:
$$X_{m_1} = x - l_1 S_1, \qquad Y_{m_1} = -l_1 C_1$$
$$X_{m_2} = x - l_2 S_2, \qquad Y_{m_2} = -l_2 C_2$$
The displacement of the cart is only along the x-direction, so the velocity of the cart is simply $\dot{x}$. Using the position equations above, the kinetic energy of the system can be written as the sum of the kinetic energies of the cart and of both pendulums:
$$K.E. = \frac{1}{2}M\dot{x}^2 + \frac{1}{2}m_1 v_1^2 + \frac{1}{2}m_2 v_2^2$$
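For reference, differentiating the position equations gives the squared speeds of the two masses used below:
$$v_1^2 = \dot{X}_{m_1}^2 + \dot{Y}_{m_1}^2 = \dot{x}^2 - 2 l_1 C_1 \dot{x}\dot{\theta}_1 + l_1^2 \dot{\theta}_1^2$$
$$v_2^2 = \dot{X}_{m_2}^2 + \dot{Y}_{m_2}^2 = \dot{x}^2 - 2 l_2 C_2 \dot{x}\dot{\theta}_2 + l_2^2 \dot{\theta}_2^2$$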
Upon substituting the values of $\dot{x}$, $v_1$ and $v_2$ into the above equation, we get the following result.
$$K.E. = \frac{1}{2}(m_1 + m_2 + M)\dot{x}^2 + \frac{1}{2}m_1 l_1^2 \dot{\theta}_1^2 + \frac{1}{2}m_2 l_2^2 \dot{\theta}_2^2 - m_1 l_1 C_1 \dot{x}\dot{\theta}_1 - m_2 l_2 C_2 \dot{x}\dot{\theta}_2$$
For the potential energy of the system, the height of the cart is taken as the reference. The potential energy therefore consists of contributions from the two pendulums only and is given by
$$P.E. = -m_1 g l_1 C_1 - m_2 g l_2 C_2$$
$$\mathcal{L} = K.E. - P.E.$$
$$\mathcal{L} = \frac{1}{2}(m_1 + m_2 + M)\dot{x}^2 + \frac{1}{2}m_1 l_1^2 \dot{\theta}_1^2 + \frac{1}{2}m_2 l_2^2 \dot{\theta}_2^2 - m_1 l_1 C_1 \dot{x}\dot{\theta}_1 - m_2 l_2 C_2 \dot{x}\dot{\theta}_2 + m_1 g l_1 C_1 + m_2 g l_2 C_2$$
Next, we write the Euler-Lagrange equations for each of the generalized coordinates $x$, $\theta_1$ and $\theta_2$:
$$\frac{d}{dt}\left[\frac{\partial \mathcal{L}}{\partial \dot{x}}\right] - \frac{\partial \mathcal{L}}{\partial x} = F$$
$$\frac{d}{dt}\left[\frac{\partial \mathcal{L}}{\partial \dot{\theta}_1}\right] - \frac{\partial \mathcal{L}}{\partial \theta_1} = 0$$
$$\frac{d}{dt}\left[\frac{\partial \mathcal{L}}{\partial \dot{\theta}_2}\right] - \frac{\partial \mathcal{L}}{\partial \theta_2} = 0$$
$$\frac{\partial \mathcal{L}}{\partial \dot{x}} = (m_1 + m_2 + M)\dot{x} - m_1 l_1 C_1 \dot{\theta}_1 - m_2 l_2 C_2 \dot{\theta}_2$$
$$\frac{\partial \mathcal{L}}{\partial x} = 0$$
$$\frac{d}{dt}\left[\frac{\partial \mathcal{L}}{\partial \dot{x}}\right] = (m_1 + m_2 + M)\ddot{x} - m_1 l_1 \left(\ddot{\theta}_1 C_1 - S_1 \dot{\theta}_1^2\right) - m_2 l_2 \left(\ddot{\theta}_2 C_2 - S_2 \dot{\theta}_2^2\right) = F$$
$$\frac{\partial \mathcal{L}}{\partial \dot{\theta}_1} = m_1 l_1^2 \dot{\theta}_1 - m_1 l_1 C_1 \dot{x}$$
$$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{\theta}_1}\right) = m_1 l_1^2 \ddot{\theta}_1 - m_1 l_1 \left(C_1 \ddot{x} - S_1 \dot{x}\dot{\theta}_1\right)$$
$$\frac{\partial \mathcal{L}}{\partial \theta_1} = m_1 l_1 S_1 \dot{x}\dot{\theta}_1 - m_1 g l_1 S_1$$
$$\frac{\partial \mathcal{L}}{\partial \dot{\theta}_2} = m_2 l_2^2 \dot{\theta}_2 - m_2 l_2 C_2 \dot{x}$$
$$\frac{\partial \mathcal{L}}{\partial \theta_2} = m_2 l_2 S_2 \dot{x}\dot{\theta}_2 - m_2 g l_2 S_2$$
$$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{\theta}_2}\right) = m_2 l_2^2 \ddot{\theta}_2 - m_2 l_2 \left(C_2 \ddot{x} - S_2 \dot{x}\dot{\theta}_2\right)$$
Thus we get,
$$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{\theta}_1}\right) - \frac{\partial \mathcal{L}}{\partial \theta_1} = m_1 l_1^2 \ddot{\theta}_1 - m_1 l_1 \left(C_1 \ddot{x} - S_1 \dot{x}\dot{\theta}_1\right) - m_1 l_1 S_1 \dot{x}\dot{\theta}_1 + m_1 g l_1 S_1 = 0$$
which simplifies to
$$m_1 l_1^2 \ddot{\theta}_1 - m_1 l_1 C_1 \ddot{x} + m_1 g l_1 S_1 = 0$$
For very small angles, we will later use the following approximations to linearize the system:
$$C_1 = 1,\quad C_2 = 1,\quad S_1 = \theta_1,\quad S_2 = \theta_2,\quad \dot{\theta}_1^2 = 0,\quad \dot{\theta}_2^2 = 0$$
Similarly,
$$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{\theta}_2}\right) - \frac{\partial \mathcal{L}}{\partial \theta_2} = m_2 l_2^2 \ddot{\theta}_2 - m_2 l_2 \left(C_2 \ddot{x} - S_2 \dot{x}\dot{\theta}_2\right) - m_2 l_2 S_2 \dot{x}\dot{\theta}_2 + m_2 g l_2 S_2 = 0$$
which simplifies to
$$m_2 l_2^2 \ddot{\theta}_2 - m_2 l_2 C_2 \ddot{x} + m_2 g l_2 S_2 = 0$$
The equation for the cart coordinate gives
$$F = (m_1 + m_2 + M)\ddot{x} - m_1 l_1 \left(\ddot{\theta}_1 C_1 - S_1 \dot{\theta}_1^2\right) - m_2 l_2 \left(\ddot{\theta}_2 C_2 - S_2 \dot{\theta}_2^2\right)$$
while solving the two pendulum equations for the angular accelerations gives
$$\ddot{\theta}_1 = \frac{C_1 \ddot{x} - g S_1}{l_1}, \qquad \ddot{\theta}_2 = \frac{C_2 \ddot{x} - g S_2}{l_2}$$
Next, we substitute the obtained expressions for $\ddot{\theta}_1$ and $\ddot{\theta}_2$ into the equation for $\ddot{x}$ as follows:
$$\ddot{x} = \frac{F + m_1 l_1 \left(\ddot{\theta}_1 C_1 - S_1 \dot{\theta}_1^2\right) + m_2 l_2 \left(\ddot{\theta}_2 C_2 - S_2 \dot{\theta}_2^2\right)}{m_1 + m_2 + M}$$
$$\ddot{x} = \frac{F + m_1 l_1 \left(C_1 \frac{C_1\ddot{x} - gS_1}{l_1} - S_1\dot{\theta}_1^2\right) + m_2 l_2 \left(C_2 \frac{C_2\ddot{x} - gS_2}{l_2} - S_2\dot{\theta}_2^2\right)}{m_1 + m_2 + M}$$
$$\ddot{x} = \frac{F + m_1 \left(C_1^2\ddot{x} - gS_1C_1 - l_1S_1\dot{\theta}_1^2\right) + m_2 \left(C_2^2\ddot{x} - gS_2C_2 - l_2S_2\dot{\theta}_2^2\right)}{m_1 + m_2 + M}$$
$$(m_1 + m_2 + M)\ddot{x} - m_1 C_1^2\ddot{x} - m_2 C_2^2\ddot{x} = F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)$$
$$\left(M + m_1(1 - C_1^2) + m_2(1 - C_2^2)\right)\ddot{x} = F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)$$
$$\ddot{x} = \frac{F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)}{M + m_1 S_1^2 + m_2 S_2^2}$$
Next, we substitute this value of $\ddot{x}$ into the previously derived expressions for $\ddot{\theta}_1$ and $\ddot{\theta}_2$ to obtain
$$\ddot{\theta}_1 = \frac{C_1}{l_1}\left[\frac{F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)}{M + m_1 S_1^2 + m_2 S_2^2}\right] - \frac{g S_1}{l_1}$$
$$\ddot{\theta}_2 = \frac{C_2}{l_2}\left[\frac{F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)}{M + m_1 S_1^2 + m_2 S_2^2}\right] - \frac{g S_2}{l_2}$$
Non-linear state space representation
The equations derived above are the non-linear equations of motion of the system. With the state vector $X = [x,\ \dot{x},\ \theta_1,\ \dot{\theta}_1,\ \theta_2,\ \dot{\theta}_2]^T$ and input $U = F$, the non-linear state-space representation $\dot{X} = f(X, U)$ of the system is:
$$\dot{X} = \begin{bmatrix} \dot{x} \\ \ddot{x} \\ \dot{\theta}_1 \\ \ddot{\theta}_1 \\ \dot{\theta}_2 \\ \ddot{\theta}_2 \end{bmatrix} = \begin{bmatrix} \dot{x} \\[4pt] \dfrac{F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)}{M + m_1 S_1^2 + m_2 S_2^2} \\[10pt] \dot{\theta}_1 \\[4pt] \dfrac{C_1}{l_1}\left[\dfrac{F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)}{M + m_1 S_1^2 + m_2 S_2^2}\right] - \dfrac{gS_1}{l_1} \\[10pt] \dot{\theta}_2 \\[4pt] \dfrac{C_2}{l_2}\left[\dfrac{F - m_1\left(gS_1C_1 + l_1S_1\dot{\theta}_1^2\right) - m_2\left(gS_2C_2 + l_2S_2\dot{\theta}_2^2\right)}{M + m_1 S_1^2 + m_2 S_2^2}\right] - \dfrac{gS_2}{l_2} \end{bmatrix}$$
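As a minimal sketch, these non-linear equations can be implemented as an ODE function and simulated with ode45. The function name and signature below are illustrative (the report's own implementations are the appendix functions such as nonLinearObs1); the state ordering follows the appendix comment q = [x dx t1 dt1 t2 dt2].

function dq = craneNonlinear(t,q,F,m1,m2,M,l1,l2,g)
% Non-linear crane dynamics; q = [x; dx; theta1; dtheta1; theta2; dtheta2]
S1 = sin(q(3)); C1 = cos(q(3));
S2 = sin(q(5)); C2 = cos(q(5));
den = M + m1*S1^2 + m2*S2^2;                          % common denominator
ddx = (F - m1*(g*S1*C1 + l1*S1*q(4)^2) ...
         - m2*(g*S2*C2 + l2*S2*q(6)^2)) / den;        % cart acceleration
ddt1 = (C1*ddx - g*S1)/l1;                            % pendulum 1 angular acceleration
ddt2 = (C2*ddx - g*S2)/l2;                            % pendulum 2 angular acceleration
dq = [q(2); ddx; q(4); ddt1; q(6); ddt2];
end

For example, [t,q] = ode45(@(t,q) craneNonlinear(t,q,0,100,100,1000,20,10,9.81), tspan, q0) simulates the unforced system from an initial condition q0.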
Applying the small-angle approximations listed earlier, the linearized equations of motion become:
$$\ddot{x} = \frac{F - m_1 g\theta_1 - m_2 g\theta_2}{M}$$
$$\ddot{\theta}_1 = \frac{1}{l_1}\left(\frac{F - m_1 g\theta_1 - m_2 g\theta_2}{M}\right) - \frac{g\theta_1}{l_1}$$
$$\ddot{\theta}_1 = -\left(\frac{m_1 g}{M l_1} + \frac{g}{l_1}\right)\theta_1 - \left(\frac{m_2 g}{M l_1}\right)\theta_2 + \frac{F}{M l_1}$$
$$\ddot{\theta}_2 = \frac{1}{l_2}\left(\frac{F - m_1 g\theta_1 - m_2 g\theta_2}{M}\right) - \frac{g\theta_2}{l_2}$$
$$\ddot{\theta}_2 = -\left(\frac{m_2 g}{M l_2} + \frac{g}{l_2}\right)\theta_2 - \left(\frac{m_1 g}{M l_2}\right)\theta_1 + \frac{F}{M l_2}$$
Linearization can also be done by Jacobian linearization, i.e., by evaluating the Jacobians of the non-linear dynamics with respect to the state and the input at the equilibrium point; this yields the same linear model. In the state-space form $\dot{X} = AX + BU$ with $U = F$, the matrices are
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -\frac{m_1 g}{M} & 0 & -\frac{m_2 g}{M} & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -\frac{(M+m_1)g}{M l_1} & 0 & -\frac{m_2 g}{M l_1} & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & -\frac{m_1 g}{M l_2} & 0 & -\frac{(M+m_2)g}{M l_2} & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ \frac{1}{M} \\ 0 \\ \frac{1}{M l_1} \\ 0 \\ \frac{1}{M l_2} \end{bmatrix}$$
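A brief sketch of this approach, assuming the Symbolic Math Toolbox is available; it evaluates the Jacobians of the non-linear dynamics at the equilibrium X = 0, F = 0 and should reproduce the matrices above:

syms x dx th1 dth1 th2 dth2 F m1 m2 M l1 l2 g real
S1 = sin(th1); C1 = cos(th1); S2 = sin(th2); C2 = cos(th2);
ddx = (F - m1*(g*S1*C1 + l1*S1*dth1^2) - m2*(g*S2*C2 + l2*S2*dth2^2)) ...
      / (M + m1*S1^2 + m2*S2^2);
f = [dx; ddx; dth1; (C1*ddx - g*S1)/l1; dth2; (C2*ddx - g*S2)/l2];
X = [x dx th1 dth1 th2 dth2];
Asym = simplify(subs(jacobian(f,X), [X F], zeros(1,7)))   % state Jacobian at equilibrium
Bsym = simplify(subs(jacobian(f,F), [X F], zeros(1,7)))   % input Jacobian at equilibrium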
C) CONTROLLABILITY
Next, we obtain the conditions under which the system is controllable. The A and B matrices obtained above are independent of time, so the system is an LTI system. An LTI system is controllable if and only if its controllability matrix has full rank. The controllability matrix C has dimensions n x nm, so its rank must equal n:
$$\mathrm{rank}(C) = \mathrm{rank}\begin{bmatrix} B & AB & A^2B & A^3B & A^4B & A^5B \end{bmatrix} = n$$
C=
Where ,
For the above controllability matrix to have full rank, its determinant must be non-zero, i.e. $\det(C) \neq 0$.
The determinant of C is non-zero only when $l_1^2 - l_2^2 \neq 0$, i.e. $l_1 \neq l_2$.
Thus, the given system is controllable only when the lengths of the two crane cables are not equal.
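A short symbolic check of this condition (a sketch assuming the Symbolic Math Toolbox), building the controllability matrix directly from the linearized A and B:

syms M m1 m2 l1 l2 g positive
A = [0 1 0 0 0 0;
     0 0 -m1*g/M 0 -m2*g/M 0;
     0 0 0 1 0 0;
     0 0 -(M+m1)*g/(M*l1) 0 -m2*g/(M*l1) 0;
     0 0 0 0 0 1;
     0 0 -m1*g/(M*l2) 0 -(M+m2)*g/(M*l2) 0];
B = [0; 1/M; 0; 1/(M*l1); 0; 1/(M*l2)];
Co = [B A*B A^2*B A^3*B A^4*B A^5*B];   % controllability matrix (6 x 6)
simplify(det(Co))                       % vanishes when l1 = l2, i.e. rank drops below 6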
D) LQR CONTROLLER
We are given $M = 1000\ \mathrm{kg}$, $m_1 = m_2 = 100\ \mathrm{kg}$, $l_1 = 20\ \mathrm{m}$ and $l_2 = 10\ \mathrm{m}$. Substituting these values, the A matrix becomes:
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -0.9800 & 0 & -0.9800 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -0.5390 & 0 & -0.0490 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & -0.0980 & 0 & -1.0780 & 0 \end{bmatrix}$$
The eigenvalues of the A matrix before applying the LQR controller are:
0.0000 + 0.0000i
0.0000 + 0.0000i
0.0000 + 0.7282i
0.0000 - 0.7282i
0.0000 + 1.0425i
0.0000 - 1.0425i
We can see that all the open-loop eigenvalues lie on the imaginary axis, so the uncontrolled system is only marginally stable.
LQR controller theory:
Using an LQR controller, we try to drive the system states to zero. The system eigenvalues that make a regulator work well are the same eigenvalues that make a tracker work well: when a non-zero reference is given, the system acts as a tracker that follows the external reference input. Here, however, we treat the LQR controller as a regulator problem with no additional input, which tries to bring the system states to zero.
If the pair (A, B) is controllable, we can place the eigenvalues of the closed-loop system anywhere we want. The difficulty with placing eigenvalues directly is that we do not know in advance which eigenvalue locations will produce which kind of dynamic response, so even though controllability lets us place the eigenvalues wherever we want, pole placement is not very intuitive. The LQR controller addresses this problem by making the choice of the gain K more intuitive for the designer.
$$\dot{X} = AX + BU, \qquad U = -KX$$
Thus,
$$\dot{X} = (A - BK)X$$
The LQR problem is sometimes also called an infinite-horizon problem because the cost function is integrated from zero to infinity. Infinity is chosen because we want the system to behave well for all time, not just over a specific interval.
Our aim is to minimize the cost function given below by adjusting the weights in the Q and R matrices. The Q matrix weights the states of the system, and the R matrix weights the input:
$$J = \int_0^{\infty} \left( X(t)^T Q X(t) + U(t)^T R U(t) \right) dt$$
where the dimensions are
$$X(t): n \times 1, \quad X(t)^T: 1 \times n, \quad Q: n \times n, \quad U(t): p \times 1, \quad U(t)^T: 1 \times p, \quad R: p \times p, \quad J: 1 \times 1$$
The name of the controller is LQR because:
• L → the controller is applied to an approximated linear system as U = -KX.
• Q → the controller minimizes the quadratic cost function J, which has a well-defined minimum.
• R → R stands for the regulator behavior of the system, since the controller tries to bring the states to zero.
For example, for $X \in \mathbb{R}^2$:
$$Q = \begin{bmatrix} q_1 & q_3 \\ q_3 & q_2 \end{bmatrix}, \qquad X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$
$$X^T Q X = q_1 X_1^2 + q_2 X_2^2 + 2 q_3 X_1 X_2$$
Here $q_1$ penalizes the first state $X_1$ and $q_2$ penalizes the second state; if Q were 3 x 3, a third diagonal entry would penalize the third state. The diagonal elements penalize individual states, while the off-diagonal elements penalize combinations of states. The off-diagonal entries should also be kept smaller than the diagonal entries.
We always want $X(t)^T Q X(t)$ to be non-negative: if it could become negative, it would reduce the cost function J without producing the desired behavior, giving meaningless results. Thus Q must be chosen positive (semi-)definite so that the quadratic form $X(t)^T Q X(t)$ is never negative.
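As a small numeric illustration of these weights (the values below are arbitrary), for the 2-state example above:

Q = [4 1; 1 2];            % q1 = 4, q2 = 2, off-diagonal q3 = 1
X = [0.5; -1];             % an example state
cost = X.'*Q*X             % = q1*X1^2 + q2*X2^2 + 2*q3*X1*X2 = 2
eig(Q)                     % both eigenvalues positive, so Q is positive definite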
The matrix P is obtained by solving the algebraic Riccati equation
$$A^T P + PA - PBR^{-1}B^T P + Q = 0$$
and the feedback gain is then
$$K = R^{-1} B^T P, \qquad U = -KX$$
Upon substituting this value of K into (A - BK), we obtain a stabilizing state-feedback controller with well-placed closed-loop eigenvalues, which also minimizes the cost J defined earlier. The closed-loop eigenvalues of (A - BK) are:
-35.4675 + 0.0000i
-0.4311 + 0.0000i
-0.1798 + 0.3577i
-0.1798 - 0.3577i
-0.1269 + 0.7769i
-0.1269 - 0.7769i
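As a minimal sketch, the gain and closed-loop eigenvalues can be computed in MATLAB from the numeric A and B above; the Q and R weights shown match those used in the Appendix G code, but the exact weights used to produce the eigenvalues listed may differ:

Q = diag([1 0 0 0 0 0]);      % weight on cart position only (as in Appendix G)
R = 0.1;                      % input weight (as in Appendix G)
[K,P,E] = lqr(A,B,Q,R);       % P solves the Riccati equation, K = R\(B'*P)
eig(A - B*K)                  % closed-loop eigenvalues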
According to the lecture notes, Lyapunov's indirect method states that we first linearize the original system around the equilibrium point of interest and then check the eigenvalues, as done above. The eigenvalues of the closed-loop matrix (A - BK) all have negative real parts, and hence the original system is at least locally stable around the equilibrium point. In this case, a Lyapunov function for the linearized system is valid at least locally.
The initial states of the system were taken to be 10 degrees for mass 1 and 5 degrees for mass
2.
Fig: Position of cart after applying LQR controller
From the above graphs we can clearly see that the system is stabilized within 40 s. The graphs also show that the non-linear system is stabilized after the LQR controller is applied to it.
SECOND COMPONENT:
E) Observable Vectors
Here we consider four output vectors: x(t), (θ1(t), θ2(t)), (x(t), θ2(t)) and (x(t), θ1(t), θ2(t)). From the code, the ranks of the corresponding observability matrices are 6, 4, 6 and 6. Since the state dimension is n = 6, the output vector (θ1(t), θ2(t)) is not observable, while the other three output choices are.
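A short sketch of this check, assuming output matrices that simply select the corresponding states (state order [x dx theta1 dtheta1 theta2 dtheta2], as in the appendix comment):

Cx    = [1 0 0 0 0 0];                             % x(t)
Ct1t2 = [0 0 1 0 0 0; 0 0 0 0 1 0];                % (theta1(t), theta2(t))
Cxt2  = [1 0 0 0 0 0; 0 0 0 0 1 0];                % (x(t), theta2(t))
Call  = [1 0 0 0 0 0; 0 0 1 0 0 0; 0 0 0 0 1 0];   % (x(t), theta1(t), theta2(t))
ranks = [rank(obsv(A,Cx)) rank(obsv(A,Ct1t2)) ...
         rank(obsv(A,Cxt2)) rank(obsv(A,Call))]    % expected: 6 4 6 6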
F) Observer Design
The Luenberger observer is written in state-space form as
$$\dot{\hat{x}}(t) = A\hat{x}(t) + BU(t) + L\left(Y(t) - C\hat{x}(t)\right)$$
Here, $\hat{x}(t)$ is the state estimate, L is the observer gain matrix, $Y(t) - C\hat{x}(t)$ is the correction term, and $\hat{x}(0) = 0$. The estimation error $X_e(t) = X(t) - \hat{X}(t)$ then has the state-space representation
$$\dot{X}_e(t) = (A - LC)\,X_e(t)$$
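As a sketch, one way the observer gain L could be chosen is by pole placement on the error dynamics; the pole locations below are illustrative assumptions, and the gains Lue1, Lue3 and Lue4 used in the appendix code may have been obtained differently:

Cx = [1 0 0 0 0 0];                               % measured output: cart position x(t)
obsPoles = [-2 -2.2 -2.4 -2.6 -2.8 -3];           % assumed observer pole locations
Lue = place(A', Cx', obsPoles)';                  % observer gain via duality with state feedback
Aerr = A - Lue*Cx;                                % error dynamics: dXe/dt = (A - L*C)*Xe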
G) LQG Design
The LQG controller is a blend of the LQR controller and the Kalman filter. Substituting Q = 1, R = 0.1, and the disturbance and noise parameters Bd = 0.1 and Vd = 0.01, we obtain the following response curve of the output. The code for this response is in Appendix G.
Fig: Non-Linear LQG response for output vector: x(t)
To track a desired trajectory, we can reconfigure the controller by providing a desired x vector and tuning the LQR weights accordingly so that the feedback drives the output towards the desired value. Our design can reject constant force disturbances applied on the cart: even if the disturbance noise in the Kalman filter is increased, the LQR controller is strong enough to stabilize the cart position x(t) within a few seconds.
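A minimal sketch of how the Kalman-filter part of the LQG design might be set up in MATLAB, assuming the numeric A and B matrices, a disturbance entering through the force channel and the cart position as the measured output; Bd and Vd follow the values quoted above, while Vn and the disturbance shaping are assumptions and the exact structure in the report's code may differ:

Cc = [1 0 0 0 0 0];                 % measured output: cart position x(t)
Bd = 0.1*B;                         % assumed disturbance input (scaled force channel)
Vd = 0.01;                          % process (disturbance) noise covariance, as quoted
Vn = 0.001;                         % assumed measurement noise covariance
[Lk,Pk,Ek] = lqe(A,Bd,Cc,Vd,Vn);    % Kalman filter gain
% LQG controller = LQR state feedback applied to the Kalman estimate:
% dxhat/dt = A*xhat + B*u + Lk*(y - Cc*xhat),   u = -K*xhat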
APPENDIX-F
% Observer (estimation-error) dynamics A - L*C for each output configuration
Ac1 = A-(Lue1*c1);
Ac3 = A-(Lue3*c3);
Ac4 = A-(Lue4*c4);
% Observer state-space models, driven by the plant input and the measured output
e_sys1 = ss(Ac1,[B Lue1],c1,0);
e_sys3 = ss(Ac3,[B Lue3],c3,0);
e_sys4 = ss(Ac4,[B Lue4],c4,0);
% step(e_sys1)
% step(e_sys3)
% step(e_sys4)
% Simulate each plant and its observer under a unit-step input
[y1,t] = lsim(sys1,unitStep,tspan);
[x1,t] = lsim(e_sys1,[unitStep;y1'],tspan);
[y3,t] = lsim(sys3,unitStep,tspan);
[x3,t] = lsim(e_sys3,[unitStep;y3'],tspan);
[y4,t] = lsim(sys4,unitStep,tspan);
[x4,t] = lsim(e_sys4,[unitStep;y4'],tspan);
figure();
hold on
plot(t,y1(:,1),'r','Linewidth',2)
plot(t,x1(:,1),'k--','Linewidth',1)
ylabel('State Variables')
xlabel('time(sec)')
legend('x(t)','Estimated x(t)')
title('Response for output vector at step input: x(t)')
hold off
figure();
hold on
plot(t,y3(:,1),'r','Linewidth',2)
plot(t,y3(:,3),'b','Linewidth',2)
plot(t,x3(:,1),'k--','Linewidth',1)
plot(t,x3(:,3),'m--','Linewidth',1)
ylabel('State Variables')
xlabel('time(sec)')
legend('x(t)','theta_2(t)','Estimated x(t)','Estimated theta_2(t)')
title('Response for output vector at step input: (x(t),theta_2(t))')
hold off
figure();
hold on
plot(t,y4(:,1),'r','Linewidth',2)
plot(t,y4(:,2),'g','Linewidth',2)
plot(t,y4(:,3),'b','Linewidth',2)
plot(t,x4(:,1),'k--','Linewidth',1)
plot(t,x4(:,2),'r--','Linewidth',1)
plot(t,x4(:,3),'m--','Linewidth',1)
ylabel('State Variables')
xlabel('time(sec)')
legend('x(t)','theta_1(t)','theta_2(t)','Estimated x(t)', ...
    'Estimated theta_1(t)','Estimated theta_2(t)')
title('Response for output vector at step input: (x(t),theta_1(t),theta_2(t))')
hold off
%% Linear Model Observer Response
[t,q1] = ode45(@(t,q)linearObs1(t,q,Lue1),tspan,q0);
figure();
hold on
plot(t,q1(:,1))
ylabel('state variables')
xlabel('time (sec)')
title('Linear system Observer for output vector: x(t)')
legend('x')
hold off
[t,q3] = ode45(@(t,q)linearObs3(t,q,Lue3),tspan,q0);
figure();
hold on
plot(t,q3(:,1))
plot(t,q3(:,5))
ylabel('state variables')
xlabel('time (sec)')
title('Linear system Observer for output vector: (x(t),theta_2(t))')
legend('x','theta_2')
hold off
[t,q4] = ode45(@(t,q)linearObs4(t,q,Lue4),tspan,q0);
figure();
hold on
plot(t,q4(:,1))
plot(t,q4(:,3))
plot(t,q4(:,5))
ylabel('state variables')
xlabel('time (sec)')
title('Linear system Observer for output vector: (x(t),theta_1(t),theta_2(t))')
legend('x','theta_1','theta_2')
hold off
%% Non-linear Model Observer Response
[t,q1] = ode45(@(t,q)nonLinearObs1(t,q,1,Lue1),tspan,q0);
figure();
hold on
plot(t,q1(:,1))
ylabel('state variables')
xlabel('time (sec)')
title('Non-Linear System Observer for output vector: x(t)')
legend('x')
hold off
[t,q3] = ode45(@(t,q)nonLinearObs3(t,q,1,Lue3),tspan,q0);
figure();
hold on
plot(t,q3(:,1))
plot(t,q3(:,5))
ylabel('state variables')
xlabel('time (sec)')
title('Non-Linear System Observer for output vector: (x(t),theta_2(t))')
legend('x','theta_2')
hold off
[t,q4] = ode45(@(t,q)nonLinearObs4(t,q,1,Lue4),tspan,q0);
figure();
hold on
plot(t,q4(:,1))
plot(t,q4(:,3))
plot(t,q4(:,5))
ylabel('state variables')
xlabel('time (sec)')
title('Non-Linear System Observer for output vector: (x(t),theta_1(t),theta_2(t))')
legend('x','theta_1','theta_2')
hold off
APPENDIX-G
clear all
%% Defining variables
syms m1 g m2 M L1 L2 x dx
m1 = 100;
m2 = 100;
M = 1000;
L1 = 20;
L2 = 10;
g = 9.81;
tspan = 0:0.1:100;
% q = [x dx t1 dt1 t2 dt2];
%Enter initial conditions
q0 = [2 0 deg2rad(0) 0 deg2rad(0) 0];
%% Linearized Model
A = [0 1 0 0 0 0;
     0 0 -m1*g/M 0 -m2*g/M 0;
     0 0 0 1 0 0;
     0 0 -((M*g)+(m1*g))/(M*L1) 0 -g*m2/(M*L1) 0;
     0 0 0 0 0 1;
     0 0 -m1*g/(M*L2) 0 -((M*g)+(m2*g))/(M*L2) 0];
B = [0; 1/M; 0; 1/(L1*M); 0; 1/(L2*M)];
c1 = [1 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0];
d = [1;0;0];
sys1 = ss(A,B,c1,d);
%% LQR Controller
Q = [1 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0; 0 0 0 0 0 0];
R = 0.1;
[K,S,P] = lqr(A,B,Q,R);
sys = ss(A-B*K,B,c1,d);
% step(sys,200);