
DEPARTMENT OF MECHANICAL ENGINEERING

21ARE304 Robotics and Control


Laboratory Record

Fifth Semester
B.Tech. Automation and Robotics Engineering

Mahilash S

AM.EN.U4ARE22045

AMRITA SCHOOL OF ENGINEERING


AMRITA VISHWA VIDYAPEETHAM
AMRITAPURI CAMPUS, CLAPPANA P.O.
KERALA, INDIA 690 525
DEPARTMENT OF MECHANICAL ENGINEERING
AMRITA SCHOOL OF ENGINEERING
AMRITA VISHWA VIDYAPEETHAM
AMRITAPURI CAMPUS

Certified that this is the bonafide record of work done by


Mr/Ms ........................................ (Reg. No: ........................)
Branch: ........................................, in the Optimization Techniques
Laboratory of this Institution, as prescribed by AMRITA VISHWA
VIDYAPEETHAM University for the Sixth Semester B.Tech. Degree,
during the year 2024-2025.

Date………………… Staff in charge Chairperson


Amritapuri Robotics and Control Mechanical Department
INDEX
Sl.No. Experiment (Page No.)

1. To perform unconstrained minimization using MATLAB’s fminunc function. (Page 1)
2. To solve a minimization problem using MATLAB’s derivative-free fminsearch function. (Page 4)
3. To solve a constrained nonlinear optimization problem using MATLAB’s fmincon function. (Page 7)
4. To perform constrained minimization with bound limits using MATLAB’s fmincon. (Page 10)
5. To handle nonlinear inequality constraints in an optimization problem using fmincon. (Page 13)
6. To solve a nonlinear constrained problem by providing gradients using fmincon. (Page 16)
7. To minimize the Rosenbrock (banana) function using MATLAB optimization routines. (Page 20)
8. To minimize a computationally expensive function using parallel computing in MATLAB. (Page 23)
9. To solve a high-dimensional nonlinear optimization problem using fminunc. (Page 26)
10. To solve a linear programming problem using MATLAB’s linprog function. (Page 28)
11. To optimize a long-term investment strategy using solver-based linear programming. (Page 30)
12. To solve a mixed-integer linear programming problem using MATLAB’s intlinprog function. (Page 32)
13. To optimize a supply chain using mixed-integer linear programming in MATLAB. (Page 34)
14. To solve a Sudoku puzzle using integer programming and MATLAB’s intlinprog. (Page 37)
15. To solve a quadratic programming problem with bounds using MATLAB’s quadprog. (Page 39)
16. To perform portfolio optimization using quadratic programming in MATLAB. (Page 41)
17. To perform multiobjective optimization using MATLAB’s fgoalattain function. (Page 44)
18. To generate and visualize the Pareto front using MATLAB’s gamultiobj function. (Page 47)
19. To solve a linear optimization problem using MATLAB’s problem-based approach. (Page 49)
20. To solve a nonlinear optimization problem using MATLAB’s problem-based optimization approach. (Page 51)
EXPERIMENT 1
AIM: Unconstrained Minimization Using fminunc

Minimize the function

f(x, y) = x² + y² + xy – 10x – 4y

This is a smooth, unconstrained quadratic function. The goal is to find the values of x and y
that minimize this expression using MATLAB’s fminunc solver.

APPARATUS

●​ Computer
●​ MATLAB Software

PROCEDURE

1.​ Define an objective function as a separate file or inline anonymous function.​

2.​ Choose an initial guess for the variables.​

3.​ Use the fminunc solver to minimize the function.​

4.​ Set options for optimization if needed, such as gradient specification.​

5.​ Run the function and interpret the result.

CODE

% Define the objective function

fun = @(x) x(1)^2 + x(2)^2 + x(1)*x(2) - 10*x(1) - 4*x(2);

x0 = [0,0]; % Choose an initial guess

options = optimoptions('fminunc','Display','iter','Algorithm','quasi-newton'); % Call fminunc

[x,fval] = fminunc(fun,x0,options);

disp('The minimum value is at:'); % Display results

disp(x);

disp('The minimum function value is:');

disp(fval);

% Plotting the objective function

[x1, x2] = meshgrid(-10:0.5:10, -10:0.5:10);

z = x1.^2 + x2.^2 + x1.*x2 - 10*x1 - 4*x2;

figure;

surf(x1, x2, z)

hold on

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10, 'LineWidth', 2)

title('Surface Plot of Objective Function with Minimum Point')

xlabel('x1')

ylabel('x2')

zlabel('Objective Function Value')

grid on
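
As an optional illustration of step 4 of the procedure, the sketch below supplies the hand-derived gradient of f to fminunc. The deal-based objective and the choice of the trust-region algorithm are assumptions added here for illustration, not part of the original listing.

% Optional variant (step 4): supply the analytic gradient to fminunc
funWithGrad = @(x) deal( ...
    x(1)^2 + x(2)^2 + x(1)*x(2) - 10*x(1) - 4*x(2), ...  % f(x)
    [2*x(1) + x(2) - 10; 2*x(2) + x(1) - 4]);             % gradient of f
optsGrad = optimoptions('fminunc', ...
    'Algorithm', 'trust-region', ...                      % trust-region exploits gradients
    'SpecifyObjectiveGradient', true, ...
    'Display', 'iter');
[xg, fvalg] = fminunc(funWithGrad, [0; 0], optsGrad);
disp('Minimum found with the gradient supplied:');
disp(xg');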

RESULT

Fig 1 – Surface plot of the objective function showing the minimum point identified by fminunc.

CONCLUSION

The experiment successfully demonstrated the use of MATLAB’s fminunc function to find
the local minimum of a nonlinear, unconstrained function using the quasi-Newton
optimization algorithm. By visualizing the function surface, we gained insight into the
topography of the objective function. The solver efficiently converged to a
solution, starting from an initial guess, and provided the optimal point along with the
minimum function value. This experiment highlights the power and ease of MATLAB's
optimization toolbox for solving real-world nonlinear minimization problems without
constraints.

EXPERIMENT 2
AIM: Unconstrained Minimization Using fminsearch

Minimize the function

f(x, y) = x² + y² + 4xy – 12x – 8y

This is an unconstrained nonlinear function. The objective is to locate the minimum point
using the derivative-free simplex search method provided by MATLAB’s fminsearch.

APPARATUS:

●​ Computer
●​ MATLAB software

PROCEDURE:

1.​ Define the objective function as an anonymous function in MATLAB.​

2.​ Choose an initial guess for the variables.​

3.​ Use the fminsearch function to perform the minimization.​

4.​ Display the result.​

5.​ Plot the surface of the objective function and mark the minimum point found.

CODE:

% Define the objective function

fun = @(x) x(1)^2 + x(2)^2 + 4*x(1)*x(2) - 12*x(1) - 8*x(2);

% Initial guess

x0 = [0, 0];

% Call fminsearch

[x, fval] = fminsearch(fun, x0);

% Display the results

disp('Minimum point:');

disp(x);

disp('Minimum value:');

disp(fval);

% Plotting the function

[x1, x2] = meshgrid(-10:0.5:10, -10:0.5:10);

z = x1.^2 + x2.^2 + 4*x1.*x2 - 12*x1 - 8*x2;

figure;

surf(x1, x2, z)

hold on

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10, 'LineWidth', 2)

title('Surface Plot of Objective Function with Minimum Point')

xlabel('x1')

ylabel('x2')

zlabel('Objective Function Value')

grid on

RESULT:

Fig 2: Surface plot showing the function minimized using fminsearch, with the minimum point highlighted.

CONCLUSION:

The experiment successfully used MATLAB’s fminsearch function to minimize an
unconstrained function with the Nelder-Mead simplex method. Since fminsearch does not
require gradients, it is well suited to problems where derivatives are unavailable, unreliable,
or expensive to compute.

EXPERIMENT 3
AIM: Constrained Minimization Using fmincon

Minimize the function​


f(x, y) = x² + y²​
subject to the constraints​
x + y ≥ 1 and x ≥ 0, y ≥ 0​
This experiment demonstrates how to solve a constrained optimization problem using
MATLAB’s fmincon solver.

APPARATUS:

●​ Computer
●​ MATLAB software

PROCEDURE:
1.​ Define the objective function as an anonymous function.​

2.​ Specify inequality constraints in matrix form: A·x ≤ b.​

3.​ Provide lower bounds for variables.​

4.​ Use fmincon to solve the problem with the defined constraints.​

5.​ Plot the feasible region and mark the solution point.

CODE:

% Define the objective function

fun = @(x) x(1)^2 + x(2)^2;

% Initial guess

x0 = [0.5, 0.5];

% Inequality constraints: A*x <= b means -x - y <= -1 (i.e., x + y >= 1)

A = [-1, -1];

b = -1;

% Lower bounds

lb = [0, 0];

% Call fmincon

options = optimoptions('fmincon','Display','iter','Algorithm','sqp');

[x, fval] = fmincon(fun, x0, A, b, [], [], lb, [], [], options);

% Display results

disp('Optimal point:');

disp(x);

disp('Minimum value:');

disp(fval);

% Plot the function and constraints

[x1, x2] = meshgrid(0:0.1:2, 0:0.1:2);

z = x1.^2 + x2.^2;

figure;

surf(x1, x2, z, 'EdgeColor', 'none')

hold on

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10, 'LineWidth', 2)

title('Constrained Optimization with fmincon')

xlabel('x')

ylabel('y')

zlabel('Objective Function')

grid on
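
Step 5 of the procedure asks for a plot of the feasible region. The lines below, run after the code above, give one possible 2-D view of the region x + y ≥ 1, x ≥ 0, y ≥ 0 together with the fmincon solution; the patch-based shading is an illustrative choice.

% Optional 2-D view of the feasible region and the solution (step 5)
figure;
fill([1 2 2 0 0], [0 0 2 2 1], [0.85 0.92 1], 'EdgeColor', 'none') % feasible set within [0,2]^2
hold on
fplot(@(t) 1 - t, [0 1], 'k', 'LineWidth', 1.5)                    % active constraint x + y = 1
plot(x(1), x(2), 'r*', 'MarkerSize', 10, 'LineWidth', 2)           % optimum from fmincon
axis([0 2 0 2]); axis square
xlabel('x'); ylabel('y')
title('Feasible Region and fmincon Solution')
legend('feasible region', 'x + y = 1', 'optimum')
grid on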

RESULT:

Fig 3.1: Surface plot of the objective function under linear constraints using fmincon, showing the optimal
solution.

CONCLUSION:

This experiment illustrates how fmincon can be used to solve a simple constrained
optimization problem. Minimizing the function subject to lower bounds and a linear
inequality constraint showcases fmincon's ability to handle practical restrictions on the
decision variables.

EXPERIMENT 4
AIM: Minimization with Bound Constraints

Minimize the function

f(x) = (x – 2)² + exp(x)

subject to the bound constraint

–1 ≤ x ≤ 2

This experiment demonstrates how to solve a bound-constrained minimization problem using
MATLAB’s fminbnd function.

APPARATUS:

●​ Computer
●​ MATLAB Software

PROCEDURE:

1.​ Define the scalar objective function as an anonymous function.​

2.​ Set the lower and upper bounds for the decision variable.​

3.​ Use fminbnd to perform the bound-constrained minimization.​

4.​ Display the optimal point and corresponding function value.​

5.​ Plot the function over the interval and mark the minimum.

CODE:

% Define the objective function

fun = @(x) (x - 2)^2 + exp(x);

% Bounds

lb = -1;

ub = 2;

% Use fminbnd to find the minimum

[x, fval] = fminbnd(fun, lb, ub);

% Display results

disp('Minimum occurs at x =');

disp(x);

disp('Minimum function value =');

disp(fval);

% Plot the function

fplot(fun, [lb, ub])

hold on

plot(x, fval, 'r*', 'MarkerSize', 10, 'LineWidth', 2)

title('Bound-Constrained Minimization Using fminbnd')

xlabel('x')

ylabel('f(x)')

grid on

RESULT:

Fig 4 – Plot of the function within bounds, showing the minimum point identified using fminbnd.

CONCLUSION:

The experiment demonstrates how MATLAB's fminbnd function can effectively find the
minimum of a single-variable function within given bounds. This method is efficient for
bounded scalar problems where no gradients are needed.

EXPERIMENT 5
AIM: Minimization with Nonlinear Inequality Constraints

Minimize the function

f(x, y) = x² + y²

subject to the nonlinear constraint

(x – 1)² + y² ≤ 1

This experiment demonstrates how to solve a problem with nonlinear inequality constraints
using MATLAB’s fmincon.

APPARATUS:

●​ Computer
●​ MATLAB software

PROCEDURE:

1.​ Define the objective function as an anonymous function.​

2.​ Define the nonlinear constraint function separately.​

3.​ Choose an initial guess that satisfies or is near the constraint boundary.​

4.​ Use fmincon with appropriate options to solve the problem.​

5.​ Visualize the constraint region and the solution.

CODE:

% Objective function

fun = @(x) x(1)^2 + x(2)^2;

% Initial guess

x0 = [0.5, 0.5];

% Nonlinear constraint function

nonlcon = @(x) deal((x(1)-1)^2 + x(2)^2 - 1, []); % c(x) <= 0

% Call fmincon

options = optimoptions('fmincon','Display','iter','Algorithm','sqp');

[x, fval] = fmincon(fun, x0, [], [], [], [], [], [], nonlcon, options);

% Display results

disp('Optimal point:');

disp(x);

disp('Minimum value:');

disp(fval);

% Plot the feasible region and solution

[x1, x2] = meshgrid(-0.5:0.01:2, -1.2:0.01:1.2);

C = ((x1 - 1).^2 + x2.^2 <= 1);

Z = x1.^2 + x2.^2;

figure;

surf(x1, x2, Z, 'EdgeColor', 'none')

hold on

contour3(x1, x2, double(C), [1 1], 'k', 'LineWidth', 2)

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10)

title('Minimization with Nonlinear Inequality Constraint')

xlabel('x')

ylabel('y')

zlabel('f(x,y)')

grid on
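
The deal-based anonymous constraint used above can equivalently be written as a named function, which is often easier to read and extend. This is a minimal sketch: the function name circleCon is illustrative, and in a script the definition would be placed at the end of the file.

% Equivalent constraint written as a named function (illustrative name)
% fmincon expects c(x) <= 0 and ceq(x) = 0
function [c, ceq] = circleCon(x)
    c = (x(1) - 1)^2 + x(2)^2 - 1;   % stay inside the unit circle centred at (1, 0)
    ceq = [];                        % no equality constraints
end

% Usage: [x, fval] = fmincon(fun, x0, [], [], [], [], [], [], @circleCon, options);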

RESULT:

Fig 5 - 3D plot of the objective function with constraint boundary and optimal point marked.

CONCLUSION:

This experiment shows how fmincon handles nonlinear inequality constraints. The constraint defines
a circular feasible region, and the optimizer successfully finds the minimum point within that region.

EXPERIMENT 6
AIM: Nonlinear Constraints with Gradients

Minimize the function

f(x, y) = x² + y²

subject to the constraint

x² – y ≤ 0 (that is, y ≥ x²)

with user-supplied gradients for both the objective and the constraint.

This experiment shows how providing analytical gradients can improve optimization
efficiency using fmincon.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define the objective function and its gradient.​

2.​ Define the nonlinear constraint and its gradient.​

3.​ Supply these to fmincon using the 'SpecifyObjectiveGradient' and 'SpecifyConstraintGradient' options.​

4.​ Choose an appropriate initial point.​

5.​ Run the optimization and visualize the result.

CODE:

% Main script

% Initial point

x0 = [1; 1];

% Set options to specify gradients

options = optimoptions('fmincon', ...

'Algorithm', 'sqp', ...

'SpecifyObjectiveGradient', true, ...

'SpecifyConstraintGradient', true, ...

'Display', 'iter');

% Call fmincon

[x, fval] = fmincon(@objfun, x0, [], [], [], [], [], [], @nonlcon, options);

% Display result

disp('Minimum point:');

disp(x);

disp('Minimum value:');

disp(fval);

% Plotting

[x1, x2] = meshgrid(-1:0.1:2, -1:0.1:3);

z = x1.^2 + x2.^2;

constraint = x1.^2 - x2;

figure;

surf(x1, x2, z, 'EdgeColor', 'none')

hold on

contour3(x1, x2, constraint, [0 0], 'k', 'LineWidth', 2)

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10)

title('Minimization with User-Supplied Gradients')

xlabel('x')

ylabel('y')

zlabel('Objective Function')

grid on

% Objective function and gradient

function [f, gradf] = objfun(x)

f = x(1)^2 + x(2)^2;

gradf = [2*x(1); 2*x(2)];

end

% Nonlinear constraint and gradient

function [c, ceq, gradc, gradceq] = nonlcon(x)

c = x(1)^2 - x(2); % Inequality constraint: c(x) <= 0

ceq = []; % No equality constraint

gradc = [2*x(1); -1]; % Gradient of c(x)

gradceq = []; % No gradient for equality

end

RESULT:

Fig 6 – Surface plot of objective function with constraint curve and optimal point using gradients.

CONCLUSION:

This experiment shows that using gradient information in fmincon enhances performance and
precision. It is especially useful when dealing with complex problems or when faster
convergence is desired.

EXPERIMENT 7
AIM: Banana Function Minimization

Minimize the Rosenbrock (banana) function:

f(x, y) = 100(y – x²)² + (1 – x)²

This non-convex function is shaped like a banana valley and is often used to test the
performance of optimization algorithms. We'll use fminunc to find the minimum.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define the Rosenbrock function as an anonymous function.​

2.​ Provide a suitable initial guess.​

3.​ Use fminunc to minimize the function.​

4.​ Optionally supply gradient information to improve convergence.​

5.​ Visualize the function surface and the optimal point.

CODE:

% Define Rosenbrock function

rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

% Initial guess

x0 = [-1.2, 1];

% Use fminunc

options = optimoptions('fminunc','Display','iter','Algorithm','quasi-newton');

[x, fval] = fminunc(rosen, x0, options);

% Display result

disp('Minimum point:');

disp(x);

disp('Minimum value:');

disp(fval);

% Plotting

[x1, x2] = meshgrid(-2:0.05:2, -1:0.05:3);

z = 100*(x2 - x1.^2).^2 + (1 - x1).^2;

figure;

surf(x1, x2, z, 'EdgeColor', 'none')

hold on

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10)

title('Banana Function Minimization (Rosenbrock Function)')

xlabel('x')

ylabel('y')

zlabel('f(x, y)')

grid on
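
Step 4 of the procedure mentions optionally supplying gradient information. The sketch below shows one way this could be done; the function name rosenWithGrad and the trust-region choice are assumptions, and the derivatives follow directly from differentiating f.

% Optional (step 4): Rosenbrock function with its analytic gradient
% (as a local function, placed at the end of the script file)
function [f, g] = rosenWithGrad(x)
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));   % df/dx1
          200*(x(2) - x(1)^2)];                      % df/dx2
end

% Usage:
% opts = optimoptions('fminunc', 'SpecifyObjectiveGradient', true, ...
%                     'Algorithm', 'trust-region', 'Display', 'iter');
% [x, fval] = fminunc(@rosenWithGrad, [-1.2; 1], opts);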

RESULT:

Fig 7 – Surface plot of the Rosenbrock function with minimum point marked using fminunc.

CONCLUSION:

The experiment demonstrates the use of fminunc to solve the non-convex Rosenbrock
function. Despite the function’s curved valley structure, fminunc successfully found the
global minimum due to its robust quasi-Newton algorithm.

EXPERIMENT 8
AIM: Minimizing an Expensive Function Using Parallel Computing

Minimize the function

f(x, y) = (x – 3)² + (y + 1)² + sin(x)² + cos(y)²

The function is treated as computationally expensive: the code adds a short pause to each
evaluation to simulate cost. The goal is to minimize it using fminunc with parallel computing
enabled to accelerate the finite-difference gradient evaluations.

APPARATUS:

●​ Computer​

●​ MATLAB Software​

●​ Parallel Computing Toolbox

PROCEDURE:

1.​ Define the objective function with some delay to simulate computational cost.​

2.​ Set the initial guess for the variables.​

3.​ Configure fminunc to use parallel evaluations.​

4.​ Run the optimization with parallel computing enabled.​

5.​ Plot the function surface and the result.

CODE:

x0 = [0, 0];

% Optimization options

options = optimoptions('fminunc', 'Algorithm', 'quasi-newton', ...

'Display', 'iter', ...

'UseParallel', true);

% Start parallel pool if not already started

if isempty(gcp('nocreate'))

parpool;

end

% Call fminunc

[x, fval] = fminunc(@expensiveFun, x0, options);

% Display result

disp('Minimum point:');

disp(x);

disp('Minimum value:');

disp(fval);

[x1, x2] = meshgrid(-2:0.1:6, -4:0.1:2); % Plotting (without pause)

z = (x1 - 3).^2 + (x2 + 1).^2 + sin(x1).^2 + cos(x2).^2;

figure;

surf(x1, x2, z, 'EdgeColor', 'none')

hold on

plot3(x(1), x(2), fval, 'r*', 'MarkerSize', 10)

title('Minimization of Expensive Function Using Parallel Computing')

xlabel('x')

ylabel('y')

zlabel('f(x, y)')

grid on

function f = expensiveFun(x) % Expensive function (in a separate function block)

pause(0.01); % simulate computation delay

f = (x(1)-3)^2 + (x(2)+1)^2 + sin(x(1))^2 + cos(x(2))^2;

end
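
One way to see the effect of 'UseParallel' is to time the same solve with and without it. This is a minimal sketch: the measured speed-up depends on the machine and the pool size, and if these lines are added to the script above they must appear before the expensiveFun definition.

% Optional timing comparison (place before the local function in a script)
optsSerial   = optimoptions('fminunc', 'Algorithm', 'quasi-newton', ...
                            'Display', 'off', 'UseParallel', false);
optsParallel = optimoptions(optsSerial, 'UseParallel', true);

tic; fminunc(@expensiveFun, x0, optsSerial);   tSerial   = toc;
tic; fminunc(@expensiveFun, x0, optsParallel); tParallel = toc;

fprintf('Serial: %.2f s, Parallel: %.2f s\n', tSerial, tParallel);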

RESULT:

Fig 8 – Surface plot of an expensive function, showing the minimum point located using parallel evaluation in
fminunc.

CONCLUSION:

By enabling parallel computing, the optimization can run substantially faster when each
function evaluation is costly, because the finite-difference gradient evaluations are distributed
across workers. This is well suited to simulations or models with high computational cost per
evaluation.​

EXPERIMENT 9
AIM: Solve Nonlinear Problem with Many Variables

Minimize the function

f(x) = Σᵢ₌₁ⁿ [(xᵢ – 1)² + sin²(xᵢ)]

This experiment involves optimizing a nonlinear function with many variables. The
purpose is to test the scalability of fminunc when the number of variables is large (e.g., n =
50).

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define a high-dimensional objective function.​

2.​ Initialize all variables to a common starting value.​

3.​ Use fminunc with appropriate options.​

4.​ Observe how well the solver handles a large number of variables.​

5.​ Plot the convergence history (optional).

CODE:

n = 50;

fun = @(x) sum((x - 1).^2 + sin(x).^2); % Objective function (sum of squared deviations and sinusoidal terms)

x0 = zeros(n,1); % Initial guess (vector of zeros)

% Optimization options

options = optimoptions('fminunc', ...

'Algorithm', 'quasi-newton', ...

'Display', 'iter', ...

'MaxIterations', 500);

[x, fval] = fminunc(fun, x0, options); % Solve the problem

disp('Optimal solution found at:');

disp(x');

disp('Minimum function value:');

disp(fval);
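
For the optional convergence plot mentioned in step 5, one possible approach wraps the solve in a function so that a nested output function can record the objective value after every iteration. The file and function names below are illustrative.

function runHighDimDemo
% Illustrative wrapper (saved as runHighDimDemo.m): records and plots the
% convergence history of fminunc on the 50-variable problem.
n = 50;
fun = @(x) sum((x - 1).^2 + sin(x).^2);
x0 = zeros(n, 1);
history = [];                              % filled in by the nested function below

options = optimoptions('fminunc', ...
    'Algorithm', 'quasi-newton', ...
    'Display', 'off', ...
    'OutputFcn', @recordHistory);

[~, fval] = fminunc(fun, x0, options);
fprintf('Final objective value: %.6f\n', fval);

figure;
semilogy(history, '.-')
xlabel('Iteration')
ylabel('Objective function value')
title('Convergence History (quasi-Newton)')
grid on

    function stop = recordHistory(~, optimValues, state)
        % Nested so it can append to "history" in the parent workspace
        stop = false;
        if strcmp(state, 'iter')
            history(end+1) = optimValues.fval;
        end
    end
end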

RESULT:

Fig 9 – Since this is a high-dimensional problem, no 2D or 3D plot is shown. The result is the optimized
variable vector.

CONCLUSION:

This experiment illustrates the ability of fminunc to handle large-scale optimization
problems. The solver efficiently minimized a 50-variable nonlinear objective, demonstrating
robustness for high-dimensional use cases in engineering and data modeling.

EXPERIMENT 10
AIM: Linear Programming Using linprog (Solver-Based)

Minimize the linear objective function

f(x) = –5x₁ – 4x₂

subject to the constraints:

6x₁ + 4x₂ ≤ 24

x₁ + 2x₂ ≤ 6

–x₁ + x₂ ≤ 1

x₁ ≥ 0, x₂ ≥ 0

This experiment demonstrates how to solve a linear programming problem using MATLAB’s
linprog function.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define the linear cost function coefficients.​

2.​ Set up the inequality constraint matrix and right-hand side.​

3.​ Specify lower bounds for decision variables.​

4.​ Call linprog to solve the problem.​

5.​ Display and interpret the results.



CODE:

f = [-5; -4]; % Coefficients for the objective function

A = [6, 4; 1, 2; -1, 1]; % Inequality constraints A*x ≤ b (third row encodes -x1 + x2 ≤ 1)

b = [24; 6; 1];

lb = [0; 0]; % Lower bounds (x1 ≥ 0, x2 ≥ 0)

options = optimoptions('linprog','Display','iter'); % Call linprog

[x, fval] = linprog(f, A, b, [], [], lb, [], options);

disp('Optimal point:');

disp(x);

disp('Optimal value of objective function:');

disp(fval);

RESULT:

Fig 10 – Text-based solution output showing optimal values of decision variables and minimum objective
function value.

CONCLUSION:

This experiment successfully applied linprog to solve a linear programming problem with
multiple inequality constraints. The method provides a fast and reliable solution for problems
involving linear cost functions and constraints, commonly used in operations research and
resource allocation.

EXPERIMENT 11
AIM: Maximize Long-Term Investments Using Linear Programming (Solver-Based)

Maximize the objective

Profit = 0.07x₁ + 0.12x₂ + 0.15x₃

subject to the constraints:

x₁ + x₂ + x₃ = 1 (total investment is 100%)

x₁ ≥ 0.2 (at least 20% in investment 1)

x₃ ≤ 0.5 (no more than 50% in investment 3)

This experiment applies linear programming to determine an optimal investment portfolio
using MATLAB’s linprog.

APPARATUS:

●​ Computer​

●​ MATLAB Software​

PROCEDURE:

1.​ Convert the maximization problem into a minimization by negating the profit
coefficients.​

2.​ Set up equality and bound constraints.​

3.​ Use linprog to solve for optimal investment proportions.​

4.​ Interpret the solution and analyze the allocation.

CODE:

f = [-0.07; -0.12; -0.15]; % Objective function (maximize => minimize negative profit)

Aeq = [1, 1, 1]; % Equality constraint: total investment = 1

beq = 1;

lb = [0.2; 0; 0];

ub = [1; 1; 0.5];

options = optimoptions('linprog','Display','iter'); % Call linprog

[x, fval] = linprog(f, [], [], Aeq, beq, lb, ub, options);

disp('Optimal investment fractions:');

disp(x);

disp('Maximum profit:');

disp(-fval); % Negated to show maximum

RESULT:

Fig 11 – Display of optimal investment distribution among three funds and the corresponding maximum
expected profit.

CONCLUSION:

The experiment demonstrates the use of linprog in financial optimization. The optimal
allocation meets all investment constraints while maximizing returns. This highlights the
practical application of linear programming in portfolio management and decision-making.

EXPERIMENT 12
AIM: Mixed-Integer Linear Programming Using intlinprog

Minimize the objective function

f(x) = –3x₁ – 5x₂

subject to:

2x₁ + x₂ ≤ 6

x₁ + x₂ ≤ 4

x₁, x₂ ≥ 0 and integer

This experiment demonstrates how to use intlinprog to solve a mixed-integer linear
programming (MILP) problem.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define the objective function coefficients.​

2.​ Specify inequality constraints in matrix form.​

3.​ Define variable bounds and specify integer constraints.​

4.​ Use intlinprog to solve the MILP.​

5.​ Output the integer solution and interpret the result.

CODE:

f = [-3; -5]; % Objective function coefficients (minimize)

A = [2, 1; 1, 1]; % Inequality constraints A*x ≤ b

b = [6; 4];

lb = [0; 0]; % Variable bounds

intcon = [1, 2]; % Integer variables: both x1 and x2

options = optimoptions('intlinprog','Display','iter'); % Call intlinprog

[x, fval] = intlinprog(f, intcon, A, b, [], [], lb, [], options);

disp('Optimal integer solution:');

disp(x);

disp('Minimum objective function value:');

disp(fval);

RESULT:

Fig 12 – Tabular output showing optimal integer values for decision variables and minimum cost.

CONCLUSION:

The experiment demonstrates how intlinprog handles problems where variables must be
integers. Such problems arise in real-life scenarios like scheduling, resource assignment, and
logistics, where fractional values are not acceptable.

EXPERIMENT 13
AIM: Factory-Warehouse-Sales Allocation Model Using MILP

Minimize the total cost of shipping from factories to warehouses (and onward to sales points)
while satisfying demand and supply constraints, with binary allocation decisions.

This experiment illustrates how to model and solve a supply chain problem using intlinprog;
the code below works through a simplified two-factory, two-warehouse instance.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define cost matrices for factory-to-warehouse and warehouse-to-sales links.​

2.​ Formulate the supply chain as a linear objective function.​

3.​ Add constraints for factory supply, warehouse capacity, and sales demand.​

4.​ Enforce binary (0/1) decisions on shipping routes using integer constraints.​

5.​ Solve the MILP using intlinprog.

CODE:

% Objective function coefficients (flattened cost vector)

f = [4; 6; 3; 5]; % Shipping costs, ordered as [F1->W1; F2->W1; F1->W2; F2->W2]

% Binary decision variables

intcon = 1:4;

% Constraints: Total supply from factories <= 1 each

A = [1 0 1 0; % Factory 1 to WH 1 and 2

0 1 0 1]; % Factory 2 to WH 1 and 2

b = [1; 1];

% Demand from each warehouse = 1 unit (cover both routes)

Aeq = [1 1 0 0; % WH 1

0 0 1 1]; % WH 2

beq = [1; 1];

% Bounds: binary decisions (0 or 1)

lb = zeros(4,1);

ub = ones(4,1);

% Solve MILP

options = optimoptions('intlinprog','Display','iter');

[x, fval] = intlinprog(f, intcon, A, b, Aeq, beq, lb, ub, options);

% Display result

disp('Shipping decisions (0 = no ship, 1 = ship):');

disp(x);

disp('Total cost:');

disp(fval);

RESULT:

Fig 13 – Binary vector indicating shipping routes chosen and the minimum total cost.

CONCLUSION:

This experiment shows how intlinprog can be applied to supply chain logistics. By modeling
allocation as binary decisions, it effectively selects optimal routes under resource constraints,
demonstrating the power of MILP in real-world scenarios.

EXPERIMENT 14
AIM: Solve Sudoku Using Integer Programming

Formulate the Sudoku puzzle as a binary integer programming problem.

Each variable represents the presence (1) or absence (0) of a digit in a specific cell.

This experiment demonstrates how to model a combinatorial constraint satisfaction problem
using intlinprog.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Represent the Sudoku grid as a 9×9×9 binary variable (row, column, digit).​

2.​ Construct linear equality constraints to enforce:​

a.​ Each number appears once per row, column, and 3×3 subgrid.​

b.​ Each cell must contain exactly one number.​

c.​ Pre-filled cells (clues) are fixed as constraints.​

3.​ Use intlinprog to find a solution.​

4.​ Reconstruct the solved Sudoku grid from the binary solution vector.

CODE:

f = zeros(64,1); % No cost: pure feasibility problem (4x4 grid, 4 digits = 64 binaries)

Aeq = eye(64); % Simplified placeholder equalities, one per variable (not the full Sudoku rules)

beq = ones(64,1);

intcon = 1:64; % All 64 variables are integer (binary via the bounds below)

lb = zeros(64,1);

ub = ones(64,1);

Aeq = [Aeq; zeros(1,64)]; % Append one clue-style constraint fixing variable 6 to 1

Aeq(end,6) = 1;

beq = [beq; 1];

% Solve using intlinprog

options = optimoptions('intlinprog','Display','off');

[x, ~] = intlinprog(f, intcon, [], [], Aeq, beq, lb, ub, options);

disp('Partial binary solution (reshaped for display):');

disp(reshape(x, [4, 16])');
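
The listing above is a deliberately reduced stand-in and does not yet encode the rules listed in the procedure. The sketch below shows one way the cell, row, column, and block constraints could be assembled for a 4×4 grid; the index mapping, the single clue, and the variable names are illustrative assumptions.

% Hedged sketch: assemble the 4x4 ("Shidoku") rules from the procedure
N = 4;                                      % grid size (2x2 blocks)
nv = N^3;                                   % 64 binaries x(i,j,k): digit k in cell (i,j)
idx = @(i,j,k) i + N*(j-1) + N^2*(k-1);     % linear index of x(i,j,k)

Aeq = zeros(0, nv); beq = zeros(0, 1);

for i = 1:N                                 % each cell holds exactly one digit
    for j = 1:N
        row = zeros(1, nv); row(idx(i, j, 1:N)) = 1;
        Aeq = [Aeq; row]; beq = [beq; 1];
    end
end

for k = 1:N
    for i = 1:N                             % digit k appears once in row i
        row = zeros(1, nv); row(idx(i, 1:N, k)) = 1;
        Aeq = [Aeq; row]; beq = [beq; 1];
    end
    for j = 1:N                             % digit k appears once in column j
        row = zeros(1, nv); row(idx(1:N, j, k)) = 1;
        Aeq = [Aeq; row]; beq = [beq; 1];
    end
    for bi = 0:1                            % digit k appears once in each 2x2 block
        for bj = 0:1
            [ii, jj] = ndgrid(2*bi + (1:2), 2*bj + (1:2));
            row = zeros(1, nv); row(idx(ii(:), jj(:), k)) = 1;
            Aeq = [Aeq; row]; beq = [beq; 1];
        end
    end
end

row = zeros(1, nv); row(idx(1, 1, 2)) = 1;  % illustrative clue: cell (1,1) holds digit 2
Aeq = [Aeq; row]; beq = [beq; 1];

f = zeros(nv, 1);                           % pure feasibility problem: any cost works
opts = optimoptions('intlinprog', 'Display', 'off');
xsol = intlinprog(f, 1:nv, [], [], Aeq, beq, zeros(nv,1), ones(nv,1), opts);

grid4 = zeros(N);                           % recover the digit placed in each cell
for i = 1:N
    for j = 1:N
        grid4(i, j) = find(round(xsol(idx(i, j, 1:N))) == 1);
    end
end
disp('Solved 4x4 grid:');
disp(grid4)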

RESULT:

Fig 14 – Binary matrix showing valid assignment of digits to cells in a 4×4 Sudoku grid.

CONCLUSION:

This experiment shows how integer programming can solve constraint satisfaction problems
like Sudoku. Although solving a full 9×9 Sudoku requires a large binary variable set and
complex constraints, intlinprog proves effective for structured logical problems.

EXPERIMENT 15
AIM: Quadratic Programming with Bound Constraints

To solve a quadratic programming problem with bound constraints using the quadprog
function in MATLAB.

APPARATUS:

●​ Computer​

●​ MATLAB Software

PROCEDURE:

1.​ Define the quadratic objective function in the form f(x) = ½·xᵀ·H·x + fᵀ·x, where H is the (symmetric) Hessian matrix and f is the linear-term vector.​

2.​ Define bound constraints as vectors lb (lower bounds) and ub (upper bounds).​

3.​ Use the quadprog function to solve the problem: x = quadprog(H, f, [], [], [], [], lb, ub).

4.​ Interpret the solution vector x.

CODE:

% Define the quadratic and linear terms

H = [4 -2; -2 4]; % Hessian matrix

f = [-6; -8]; % Linear term

% Define bounds

lb = [0; 0]; % Lower bounds

ub = [inf; 2]; % Upper bounds

% Solve using quadprog

x = quadprog(H, f, [], [], [], [], lb, ub);

% Display result

disp('Optimal solution:');

disp(x);

% Objective function value

fval = 0.5 * x' * H * x + f' * x;

fprintf('Minimum value of objective function: %.2f\n', fval);

RESULT:

CONCLUSION:

The quadprog function successfully minimized the quadratic function under the given bound
constraints. This method is efficient for convex quadratic problems commonly found in
control systems and portfolio optimization.

EXPERIMENT 16
AIM: Portfolio Optimization Using Quadratic Programming

To optimize a portfolio by minimizing the risk (variance) subject to return and budget
constraints using quadratic programming in MATLAB.

APPARATUS:

●​ MATLAB R2020b​

●​ Optimization Toolbox

PROCEDURE:

1.​ Define the covariance matrix H representing the variance between assets.​

2.​ Define the expected returns vector r.​

3.​ Set the desired return level as an equality constraint.​

4.​ Include the constraint that the sum of all asset weights must equal 1 (fully invested).​

5.​ Use quadprog to solve the quadratic programming problem.

CODE:

% Covariance matrix (risk)

H = [0.1 0.01 0.02;

0.01 0.12 0.03;

0.02 0.03 0.15];

% No linear term in objective (risk-only minimization)

f = [0; 0; 0];

% Expected returns of the assets

r = [0.1; 0.2; 0.15];

% Desired portfolio return

targetReturn = 0.16;

% Equality constraints: [return constraint; sum(weights) = 1]

Aeq = [r';

1 1 1];

beq = [targetReturn;

1];

% Lower and upper bounds (no short selling, max 100% in one asset)

lb = [0; 0; 0];

ub = [1; 1; 1];

% Solve using quadprog

x = quadprog(H, f, [], [], Aeq, beq, lb, ub);

% Display result

disp('Optimal asset weights:');

disp(x);

% Objective value in quadprog's convention (0.5*x'*H*x); the portfolio variance itself is x'*H*x

risk = 0.5 * x' * H * x;

fprintf('Minimum risk (variance): %.4f\n', risk);
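
As an optional extension, sweeping the target return and re-solving traces out an efficient frontier. This is a minimal sketch: the range of target returns is an illustrative assumption, and the loop reuses H, f, r, lb, and ub from the listing above.

% Optional: trace the efficient frontier by sweeping the target return
targets = linspace(0.12, 0.19, 15);        % illustrative range within [min(r), max(r)]
risks = zeros(size(targets));
opts = optimoptions('quadprog', 'Display', 'off');
for k = 1:numel(targets)
    Aeqk = [r'; 1 1 1];
    beqk = [targets(k); 1];
    w = quadprog(H, f, [], [], Aeqk, beqk, lb, ub, [], opts);
    risks(k) = w' * H * w;                 % portfolio variance at this target return
end
figure;
plot(risks, targets, 'b.-')
xlabel('Portfolio variance')
ylabel('Target return')
title('Efficient Frontier (sketch)')
grid on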

RESULT:

Fig 16 Optimal weights of assets in the portfolio and the minimized risk value.

CONCLUSION:

Quadratic programming efficiently solved the portfolio optimization problem by minimizing


risk for a specified return. This approach is fundamental in finance for constructing optimal
asset allocations under return constraints.

EXPERIMENT 17
AIM: Multiobjective Optimization Using fgoalattain

To perform multiobjective optimization using the fgoalattain function in MATLAB by
minimizing the difference between the goals and the actual objective values under given
constraints.

APPARATUS:

●​ MATLAB R2020b or later​

●​ Optimization Toolbox

PROCEDURE:

1.​ Define multiple objective functions to be minimized simultaneously.​

2.​ Specify goal values for each objective.​

3.​ Define weightings for each objective (relative importance).​

4.​ Define any constraints (e.g., bounds or linear constraints).​

5.​ Use the fgoalattain function to solve the multiobjective problem.

CODE:

% Multiobjective function: two objectives returned as a column vector

objfun = @(x)[x(1)^2 + x(2)^2;
              (x(1)-1)^2 + (x(2)-2)^2];

% Initial guess

x0 = [0; 0];

% Goal for each objective

goal = [1; 1];

% Weights for each goal

weight = [1; 1];

% Lower and upper bounds

lb = [-5; -5];

ub = [5; 5];

% Create options using optimoptions

options = optimoptions('fgoalattain', 'Display', 'iter');

% Call fgoalattain with arguments in their documented order
% (A, b, Aeq, beq, lb, ub, nonlcon, options)

[x, fval] = fgoalattain(objfun, x0, goal, weight, ...
    [], [], [], [], lb, ub, [], options);

% Display result

disp('Optimal solution:');

disp(x);

disp('Objective values at optimum:');

disp(fval);

RESULT:

Fig 17 Optimized decision variables and the values of each objective function at the optimal point.

CONCLUSION:

The fgoalattain function successfully handled the multiobjective problem by finding a
compromise solution that gets as close as possible to the specified goals. This method is
valuable when multiple conflicting objectives must be balanced in engineering or economics.​

EXPERIMENT 18
AIM: Generate and Plot Pareto Front

To generate and visualize the Pareto front for a multiobjective optimization problem using
MATLAB’s gamultiobj function.

APPARATUS:

●​ MATLAB R2020b or later​

●​ Global Optimization Toolbox

PROCEDURE:

1.​ Define two or more conflicting objective functions in a single vectorized function.​

2.​ Use gamultiobj to minimize the objectives simultaneously.​

3.​ Extract the Pareto-optimal solutions and corresponding objective values.​

4.​ Plot the Pareto front to visualize trade-offs between objectives.

CODE:

objfun = @(x)[x(1)^2 + x(2)^2; % Objective 1

(x(1)-1)^2 + (x(2)-2)^2]'; % Objective 2 (transpose for gamultiobj)

nvars = 2;

lb = [-5, -5];

ub = [5, 5];

[x, fval] = gamultiobj(objfun, nvars, [], [], [], [], lb, ub);

figure;

fvalSorted = sortrows(fval, 1); % sort by objective 1 so the front plots as a curve

plot(fvalSorted(:,1), fvalSorted(:,2), 'ro-', 'LineWidth', 2);

xlabel('Objective 1'); ylabel('Objective 2');

title('Pareto Front');

grid on;

RESULT:

Fig 18 A 2D plot showing the Pareto front, where each point represents a trade-off between the two objectives.

CONCLUSION:

The Pareto front provides insight into trade-offs between multiple conflicting objectives. The
gamultiobj function in MATLAB efficiently finds a diverse set of non-dominated solutions,
useful in decision-making scenarios.

EXPERIMENT 19
AIM: Problem-Based Linear Programming

To solve a linear programming problem using MATLAB’s problem-based optimization
approach.

APPARATUS:

●​ MATLAB R2020b or later​

●​ Optimization Toolbox

PROCEDURE:

1.​ Define optimization variables using optimvar.​

2.​ Formulate the objective function as a linear expression.​

3.​ Define linear constraints using MATLAB expressions.​

4.​ Create an optimproblem object and assign objective and constraints.​

5.​ Solve the problem using solve.

CODE:

% Define optimization variables

x = optimvar('x', 2, 'LowerBound', 0); % x(1) and x(2) ≥ 0

% Define objective function: Minimize 2*x1 + 3*x2

obj = 2*x(1) + 3*x(2);

% Define individual constraints using named fields

cons.cons1 = x(1) + x(2) <= 4;

cons.cons2 = x(1) - x(2) >= 1;

% Define the optimization problem

prob = optimproblem('Objective', obj, 'Constraints', cons);

% Solve the problem

[sol, fval] = solve(prob);

% Display the result

disp('Optimal solution:');

disp(sol);

fprintf('Minimum objective function value: %.2f\n', fval);

RESULT:

Fig 19 Optimal values of x(1) and x(2) and the minimum value of the objective function.

CONCLUSION:

MATLAB’s problem-based approach offers an intuitive way to define and solve linear
programming problems symbolically, improving code readability and flexibility.

EXPERIMENT 20
AIM: Problem-Based Nonlinear Optimization

To solve a nonlinear optimization problem using MATLAB’s problem-based optimization
approach.

APPARATUS:

●​ MATLAB R2020b or later​

●​ Optimization Toolbox

PROCEDURE:

1.​ Define optimization variables with bounds using optimvar.​

2.​ Define a nonlinear objective function using optimization variables.​

3.​ Define nonlinear and linear constraints as needed.​

4.​ Set up the optimization problem with optimproblem.​

5.​ Solve the problem using solve.

CODE:

% Coordinates of the polygon vertices in clockwise order

x = [0, -1, -1, 1, 1, 0, 0];

y = [0, 0, 1, 1, -1, -1, 0];

% Plot the polygon

plot(x, y)

xlim([-1.2 1.2]) % Set limits for x-axis

ylim([-1.2 1.2]) % Set limits for y-axis

axis equal % Ensure equal scaling on both axes
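
The listing above only plots a polygonal region; it does not yet carry out the problem-based nonlinear solve described in the procedure. The sketch below shows how such a solve could look: the Rosenbrock objective, the unit-disc constraint, and the variable names are illustrative assumptions chosen because they are supported directly as optimization expressions.

% Hedged sketch of a problem-based nonlinear solve (illustrative problem)
x = optimvar('x', 'LowerBound', -2, 'UpperBound', 2);
y = optimvar('y', 'LowerBound', -2, 'UpperBound', 2);

prob = optimproblem;
prob.Objective = 100*(y - x^2)^2 + (1 - x)^2;   % nonlinear (Rosenbrock) objective
prob.Constraints.disc = x^2 + y^2 <= 1;         % nonlinear inequality constraint

x0.x = 0;                                        % initial point (required for nonlinear solves)
x0.y = 0;

[sol, fval] = solve(prob, x0);

fprintf('Optimal x = %.4f, y = %.4f, objective = %.4f\n', sol.x, sol.y, fval);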

RESULT:

Fig 20 Optimal values of x1 and x2 that minimize the nonlinear objective under constraints.

CONCLUSION:

MATLAB’s problem-based optimization makes it straightforward to define and solve
nonlinear problems with bounds and nonlinear constraints, suitable for complex engineering
optimization tasks.

