16.323 Principles of Optimal Control: MIT OpenCourseWare

This document is a homework assignment for the MIT course 16.323 Principles of Optimal Control. It contains 5 problems related to optimal control theory and numerical methods for solving optimal control problems. The problems cover topics like deriving optimal controllers with terminal constraints, properties of the Hamiltonian, numerical trajectory optimization methods, linear quadratic regulation for reference tracking, and optimizing thrust vector direction for a space launch vehicle.


MIT OpenCourseWare
http://ocw.mit.edu

16.323 Principles of Optimal Control


Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

16.323 Prof. J. P. How

Handout #4 March 20, 2007 Due: April 10, 2007

16.323 Homework Assignment #4


1. The derivation on pages 6-1 and 6-2 was done for the case of free or fixed x(t_f), and then repeated on page 6-3 for the more general boundary condition m(x(t_f), t_f) = 0.

(a) Use the result on page 6-3 to derive an optimal controller for the same case as Example 6-1 (on page 6-5), but with the terminal constraints ẏ(t_f) = 0 and y(t_f)^2 + (t_f - 5)^2 - 4 = 0.

(b) Use fsolve (as on page 5-18) to solve for the unknown parameters (there should be 5) in the control problem, and then use a simulation of the system to confirm that the control inputs cause the system to reach the target set at the appropriate t_f. Consider two cases: (i) = 0.1, b = 1; and (ii) = 100, b = 1.

2. Show that if, in the Hamiltonian H, a and g are independent of time t, then H is a constant; and that if t_f is free and h is not an explicit function of time, then this constant is zero. This is an important result that can often be used to help solve control problems.

3. Read the posted article by Betts, "Survey of Numerical Methods for Trajectory Optimization," AIAA Journal of Guidance, Control, and Dynamics, 21:193-207, 1998, and write a 1-page summary of his suggestions/conclusions.

4. We discussed in class that LQR is a great regulator in that it quickly returns the system states to 0 while balancing the amount of control used. However, we are also interested in tracking a reference command, so that y(t) → r(t) as t → ∞.

(a) Design a steady-state LQR controller for the system below using Rxx = I_2, Ruu = 0.01:

ẋ = [1 1; 1 2] x + [1; 0] u,    y = [1 0] x

A naive way to implement a reference tracker is to modify the LQR controller from u = -Kx to u = r - Kx:

ẋ = (A - BK)x + Br,    y = Cx

Verify that this leads to particularly poor tracking of a step input!
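As a quick numerical sanity check of that claim, the script below (a sketch assuming SciPy is available; the plant matrices are transcribed from the assignment text, and any minus signs lost in the scan would change the numbers but not the point) computes the LQR gain and the naive tracker's DC gain directly:

```python
# Sketch for problem 4(a): steady-state LQR gain via the CARE, then the
# DC gain of the naive tracker u = r - K x.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 0.0]])
Rxx = np.eye(2)           # state weighting
Ruu = np.array([[0.01]])  # control weighting

# Steady-state LQR: K = Ruu^{-1} B^T P with P the stabilizing CARE solution.
P = solve_continuous_are(A, B, Rxx, Ruu)
K = np.linalg.solve(Ruu, B.T @ P)

# Naive tracker u = r - K x gives xdot = (A - B K) x + B r, y = C x,
# so the steady-state output for a unit step r is y_ss = -C (A - B K)^{-1} B.
Acl = A - B @ K
y_ss = (-C @ np.linalg.solve(Acl, B)).item()
print("closed-loop poles:", np.linalg.eigvals(Acl))
print("DC gain of naive tracker:", y_ss)  # nowhere near 1
```

With these weights the closed loop is stable but its DC gain is far from unity, which is exactly the poor step tracking the problem asks you to verify.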

(b) An alternative strategy is to use u = Nr - Kx, where N is a constant. What is a good way to choose N to ensure zero steady-state error for this closed-loop system? What are the consequences of this change for the step response of the closed-loop system?

(c) A completely different approach to ensuring zero steady-state error is to use what is often called an LQ-servo. The approach is to add a new state to the system that integrates the tracking error: ẋ_i = r - y = r - Cx, giving:

d/dt [x; x_i] = [A 0; -C 0] [x; x_i] + [B; 0] u + [0; 1] r,    y = [C 0] [x; x_i]

The LQR problem statement can now be modified (ignore r in the design of u) to place a high weighting on x_i to penalize the tracking error. Use this technique to design a new controller (keep Rxx and Ruu the same as part (a), and tune the weight on x_i to achieve performance similar to part (b)). Compare the transient responses for the approaches in (b) and (c): do you see any advantages of one approach over the other?

5. Consider the optimal control of the thrust vector direction that maximizes the terminal velocity of a space launch vehicle in a rectilinear frame. The vehicle dynamics (see figure) are represented as

u̇ = a cos β
v̇ = a sin β
ẋ = u
ẏ = v

where (x, y) and (u, v) denote the position and velocity of the rocket with respect to a rectilinear inertial frame. The thrust acceleration a is assumed to be a known function of time. The boundary conditions are given as u(t_0) = v(t_0) = x(t_0) = y(t_0) = 0, v(t_f) = 0, y(t_f) = h. Also, the terminal time t_f is given, and h is the altitude of the target orbit.

(a) Show that the optimal control law takes the form

tan β = tan β_0 - c t

where β_0 and c are constant. This law is referred to as the linear tangent law.

(b) What happens if there is gravity, described by v̇ = a sin β - g with g constant?

[Figure: launch-vehicle geometry in the (x, y) plane, showing the thrust direction, target altitude h, and terminal velocity u(T) from t = 0 to t = T.]

Figure by MIT OpenCourseWare.

Figure 1: Figure for Problem 5

(c) Use the numerical methods discussed in class to find the optimal control inputs and resulting path when g = 0, h = 5000 m, a = 2 m/s², and t_f = 200 sec. Compare the numerical and analytic solutions.
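One way to carry out part (c) is direct shooting on the linear-tangent parameterization from part (a): the two unknowns fix the two terminal constraints. The sketch below (the trapezoidal quadrature, the initial guess, and all variable names are my own choices, not from the handout) solves for them with fsolve:

```python
# Sketch for problem 5(c): direct shooting using the linear tangent law
# tan(beta) = p - c*t from part (a). The unknowns (p, c) are chosen by
# fsolve so that v(tf) = 0 and y(tf) = h.
import numpy as np
from scipy.optimize import fsolve

a, g = 2.0, 0.0          # thrust acceleration [m/s^2], gravity (zero here)
tf, h = 200.0, 5000.0    # final time [s], target altitude [m]
t = np.linspace(0.0, tf, 2001)
dt = np.diff(t)

def profiles(params):
    """Velocity v(t) and altitude y(t) under tan(beta) = p - c*t."""
    p, c = params
    tan_b = p - c * t
    cos_b = 1.0 / np.sqrt(1.0 + tan_b**2)   # assumes |beta| < 90 deg
    sin_b = tan_b * cos_b
    vdot = a * sin_b - g
    v = np.concatenate(([0.0], np.cumsum(0.5 * (vdot[1:] + vdot[:-1]) * dt)))
    y = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
    return v, y, cos_b

def residuals(params):
    v, y, _ = profiles(params)
    return [v[-1], y[-1] - h]               # terminal constraints

p_opt, c_opt = fsolve(residuals, x0=[0.5, 0.005])
v, y, cos_b = profiles([p_opt, c_opt])
u_f = float(np.sum(0.5 * a * (cos_b[1:] + cos_b[:-1]) * dt))  # u(tf)
print("tan(beta0) =", p_opt, " c =", c_opt)
print("v(tf) =", v[-1], " y(tf) =", y[-1], " u(tf) =", u_f)
```

With g = 0, the recovered tan β(t) crosses zero at t_f/2 (thrust pitched up, then symmetrically down, so v(t_f) = 0), and u(t_f) is the maximized terminal velocity to compare against the analytic solution.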
