Solar Panel
Remember from Chapter 4 that a constrained optimization problem is expressed mathematically as the minimization (or maximization) of an objective function subject to equality and/or inequality constraints on the decision variables.
Let us consider a dynamic system, described by the state vector x(t) and governed by the ODE system

x′(t) = g(t, x(t), u(t)).

At time t0 we start at the initial state x(t0) = x0, and we wish to determine the trajectory of control inputs u(t) over t ∈ [t0, t1] that minimizes an integral cost functional

J(u) = ∫_{t0}^{t1} f(t, x(t), u(t)) dt.
To solve our basic optimal control problem, a set of what are called necessary conditions must be satisfied. Pontryagin introduced the adjoint function to attach the differential equation to the objective functional; these functions serve a purpose similar to that of Lagrange multipliers. The necessary conditions needed to solve the basic problem are derived from what is referred to as the Hamiltonian, H,

H(t, x, u, λ) = f(t, x, u) + λ(t) g(t, x, u).

The Maximum Principle says that there is an adjoint function (also called the co-state variable) λ(t) such that an optimal state x(t) and optimal control u(t) must necessarily satisfy:

the optimality condition ∂H/∂u = 0 at u = u(t),
the adjoint equation λ′(t) = −∂H/∂x,
the transversality condition λ(t1) = 0,
and the state equation x′(t) = g(t, x(t), u(t)) with x(t0) = x0.
The derivation of these results can be found in [S. Lenhart and J. T. Workman, Optimal Control Applied to Biological Models, Boca Raton, FL: Taylor & Francis Group, LLC, 2007].
These conditions can sometimes be solved explicitly; however, for most problems the conditions are too complicated to solve analytically. This is especially true for problems that also involve additional constraints on the state or the control. Because of this, numerical approaches are used to construct approximations to the solutions of these equations. For this purpose, the Runge-Kutta algorithm will be used to solve such problems.
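For reference, the classical fourth-order Runge-Kutta (RK4) step for an ODE x′ = f(t, x) with constant step size h is

\[
\begin{aligned}
k_1 &= f(t_i, x_i),\\
k_2 &= f\!\left(t_i + \tfrac{h}{2},\; x_i + \tfrac{h}{2}k_1\right),\\
k_3 &= f\!\left(t_i + \tfrac{h}{2},\; x_i + \tfrac{h}{2}k_2\right),\\
k_4 &= f\!\left(t_i + h,\; x_i + h\,k_3\right),\\
x_{i+1} &= x_i + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right).
\end{aligned}
\]

This is the update implemented by the MATLAB sweep functions below, with the control values supplied as an extra argument (averaged at the half-steps).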
The steps to solve optimal control problems based on Pontryagin’s Maximum Principle are summarized as follows:
Step 1: Form the Hamiltonian H for the problem.
Step 2: Write the optimality condition ∂H/∂u = 0 and solve it for the optimal control in terms of the state and the adjoint.
Step 3: Write the adjoint equation λ′ = −∂H/∂x together with its transversality boundary condition.
Step 4: Solve the state and adjoint differential equations with their boundary conditions, then substitute the result into the expression for the control.
Activity 6
Consider the problem of minimizing ∫_{0}^{1} ½(x(t)² + u(t)²) dt (equivalently, maximizing ∫_{0}^{1} −½(x² + u²) dt, the integrand coded in the script below) subject to x′(t) = u(t) − x(t) with x(0) = 1. Using Pontryagin’s Maximum Principle, the state equation, the adjoint equation with its transversality boundary condition, and the optimality condition are derived as follows:
State equation: x′ = u − x, x(0) = 1
Adjoint equation: λ′ = λ − x
Transversality condition: λ(1) = 0
Optimality condition: u = −λ
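A brief sketch of how these conditions arise, assuming the minimization form stated above:

\[
H(t, x, u, \lambda) = \tfrac{1}{2}\left(x^2 + u^2\right) + \lambda\,(u - x),
\]
\[
\frac{\partial H}{\partial u} = u + \lambda = 0 \;\Rightarrow\; u = -\lambda,
\qquad
\lambda' = -\frac{\partial H}{\partial x} = -(x - \lambda) = \lambda - x,
\qquad
\lambda(1) = 0,
\]

where the transversality condition λ(1) = 0 reflects the free terminal state at t1 = 1.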
Step 4: Solve the state and adjoint differential equations with their boundary conditions.
6.1. Show that the state and adjoint admit closed-form solutions by solving the linear two-point boundary value problem from Step 4 explicitly; a sketch of the computation follows.
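A sketch, assuming the conditions derived above: substituting u = −λ turns the state and adjoint equations into the linear boundary value problem

\[
x' = -x - \lambda, \qquad \lambda' = -x + \lambda, \qquad x(0) = 1, \quad \lambda(1) = 0.
\]

The coefficient matrix has eigenvalues ±√2, so

\[
x(t) = c_1 e^{\sqrt{2}\,t} + c_2 e^{-\sqrt{2}\,t},
\qquad
\lambda(t) = -(1+\sqrt{2})\,c_1 e^{\sqrt{2}\,t} + (\sqrt{2}-1)\,c_2 e^{-\sqrt{2}\,t},
\]

with the constants determined by the boundary conditions:

\[
c_1 + c_2 = 1,
\qquad
c_2 = (3 + 2\sqrt{2})\,e^{2\sqrt{2}}\,c_1
\;\Rightarrow\;
c_1 = \frac{1}{1 + (3 + 2\sqrt{2})\,e^{2\sqrt{2}}}.
\]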
6.2. We will now solve the optimal control problem using numerical approximations.
One of the methods for solving this problem is the Forward-Backward Sweep (FBS). This iterative method is named for the way the algorithm solves the problem’s state and adjoint ODEs. Given an approximation of the control function, FBS first solves the state ‘forward’ in time (from t0 to t1) and then solves the adjoint ‘backward’ (from t1 to t0). Once the state and the adjoint are determined, the control (u) is determined from the optimality condition.
The forward Runge-Kutta-4 algorithm for an ODE with three inputs (t, x, u) is given as:
function x = forward_runge_kutta_4(t,x,u,N,ode)
% 4th order classic Runge-Kutta method for solving
% the state equation (optimal control):
%   x' = f(t,x,u), x(0) = x0 (can be vector valued)
% vector t contains time values (constant step 1/N)
% vector x contains the initial state in its first column (row by dimension)
% vector u contains control values (row vector or row by dimension)
% constant step
h = 1/N;
h2 = h/2;
for i = 1:N
    K1 = ode(t(i),x(:,i),u(:,i));
    K2 = ode(t(i)+h2,x(:,i)+h2*K1,(u(:,i)+u(:,i+1))/2);
    K3 = ode(t(i)+h2,x(:,i)+h2*K2,(u(:,i)+u(:,i+1))/2);
    K4 = ode(t(i)+h,x(:,i)+h*K3,u(:,i+1));
    x(:,i+1) = x(:,i) + (h/6)*(K1+2*K2+2*K3+K4);
end
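As a quick standalone check, the forward sweep can be run with a fixed control; a minimal sketch, reusing the state handle defined in the script at the end of this section:

% advance x' = u - x from x(0) = 1 under a zero control guess
N = 10;
t = linspace(0,1,N+1);          % uniform mesh with step 1/N
x = zeros(1,N+1); x(1) = 1;     % first entry holds the initial state
u = zeros(1,N+1);               % fixed control values on the mesh
state = @(T,X,U) U - X;         % same handle as in the script below
x = forward_runge_kutta_4(t,x,u,N,state);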
The corresponding backward Runge-Kutta-4 algorithm for the adjoint equation (written here as the function backward_runge_kutta_4, with analogous inputs) is:

function lambda = backward_runge_kutta_4(t,x,lambda,u,N,ode)
% 4th order classic Runge-Kutta method for solving the adjoint
% equation backward in time (optimal control):
%   lambda' = f(t,x,lambda,u), lambda(t1) = lamT
% vector lambda(N+1) = lamT (on input) (row vector or row by dimension)
% vector x contains state function values (row vector or row by dimension)
% vector u contains control values (row vector or row by dimension)
% vector t contains time values (constant step 1/N)
% constant step
h = 1/N;
h2 = h/2;
for j = 1:N
    i = N+2-j;
    K1 = ode(t(i),x(:,i),lambda(:,i),u(:,i));
    K2 = ode(t(i)-h2,(x(:,i)+x(:,i-1))/2,lambda(:,i)-h2*K1,(u(:,i)+u(:,i-1))/2);
    K3 = ode(t(i)-h2,(x(:,i)+x(:,i-1))/2,lambda(:,i)-h2*K2,(u(:,i)+u(:,i-1))/2);
    K4 = ode(t(i)-h,x(:,i-1),lambda(:,i)-h*K3,u(:,i-1));
    lambda(:,i-1) = lambda(:,i) - (h/6)*(K1+2*K2+2*K3+K4);
end
The following MATLAB function F_B_Sweep.m uses the above two functions to find the state (x), the adjoint (lambda), and the control (u). The stopping criterion is determined by computing the relative errors for the state, the co-state, and the control and requiring that all three be less than a specified value.
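Concretely, the relative-error requirement for the control (and likewise for the state and the co-state) can be written, following the convention used in Lenhart and Workman's codes, as

\[
\frac{\sum_i \lvert u_i - u_i^{\text{old}} \rvert}{\sum_i \lvert u_i \rvert} \le \delta
\quad\Longleftrightarrow\quad
\delta \sum_i \lvert u_i \rvert - \sum_i \lvert u_i - u_i^{\text{old}} \rvert \ge 0,
\]

where δ is the error bound delta in the code.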
function [x,lambda,u,k] = F_B_Sweep(x0,t,h,h2,u_func,state,adjoint,control,N)
%F_B_Sweep(x0,t,h,h2,u_func,state,adjoint,control,N) solves an optimal
%control problem using the forward-backward sweep. x0 is the initial value
%for the state ODE. t is the time interval broken into intervals of length
%h; h2 is half the mesh interval size. u_func is how the FBS computes the
%control for each step. state, adjoint, and control are the equations
%related to the optimal control problem. N is the number of mesh
%subintervals.
%error bound
delta = 0.001;
%initial guesses (assumed here): zero control and adjoint, state held at x0;
%lambda(N+1) = 0 encodes the transversality condition
u = zeros(1,N+1);
x = x0*ones(1,N+1);
lambda = zeros(1,N+1);
k = 0;
test = -1;
while (test < 0)
    %Saved old data for error computation to see if the stop criteria have
    %been met.
    oldu = u;
    oldx = x;
    oldlambda = lambda;
    %sweep the state forward and the adjoint backward in time
    x = forward_runge_kutta_4(t,x,u,N,state);
    lambda = backward_runge_kutta_4(t,x,lambda,u,N,adjoint);
    %update the control (u_func signature assumed: it combines the value
    %from the optimality condition with the previous iterate)
    u = u_func(t,x,lambda,oldu,control);
    %relative errors for control, state and adjoint must all be below delta
    test = min([delta*sum(abs(u)) - sum(abs(oldu - u)), ...
                delta*sum(abs(x)) - sum(abs(oldx - x)), ...
                delta*sum(abs(lambda)) - sum(abs(oldlambda - lambda))]);
    %Iteration count
    k = k+1;
end
end
Finally, the following MATLAB script uses F_B_Sweep.m to calculate the state, adjoint, and control, and plots the results.
%Mesh size
N = 1000;
%The integrand for the control problem
f = @(X,U) -.5*(X.^2+U.^2);
%initial state at time 0
x0 = 1;
%time mesh with constant step h; h2 is half a step
t = linspace(0,1,N+1);
h = 1/N;
h2 = h/2;
%The State, Adjoint, and Control functions needed for both processes
state = @(T,X,U) U-X;
adjoint = @(T,X,L,U) L-X;
control = @(T,X,L) -(L);
state_adjoint = @(T,X,U) [state(T,X(1),U);adjoint(T,X(1),X(2),U)];
%Control update used by the sweep (assumed form: average the value from
%the optimality condition with the previous iterate to aid convergence)
u_func = @(T,X,L,U,control) 0.5*(control(T,X,L) + U);
[x_gen,lambda_gen,u_gen,k_gen] = F_B_Sweep(x0,t,h,h2,u_func,state,adjoint,control,N);
%graph of data
% State
figure
subplot(3,1,1)
plot(t,x_gen)
xlabel('Time')
ylabel('State')
% Control
subplot(3,1,2)
plot(t,u_gen)
xlabel('Time')
ylabel('Control')
%Adjoint
subplot(3,1,3)
plot(t,lambda_gen)
xlabel('Time')
ylabel('Adjoint')
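As a check on Activity 6.1, the closed-form state sketched earlier can be overlaid on the numerical result; a minimal sketch, with the constants c1 and c2 taken from that derivation:

% overlay the closed-form solution from Activity 6.1 on the FBS result
r = sqrt(2);
c1 = 1/(1 + (3 + 2*r)*exp(2*r));   % from x(0) = 1 and lambda(1) = 0
c2 = 1 - c1;
x_exact = c1*exp(r*t) + c2*exp(-r*t);
lambda_exact = -(1 + r)*c1*exp(r*t) + (r - 1)*c2*exp(-r*t);
figure
plot(t,x_gen,'-',t,x_exact,'--')
xlabel('Time')
ylabel('State')
legend('FBS','Exact')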