
UNIVERSIDADE FEDERAL DE MINAS GERAIS

Programa de Pós-Graduação em Engenharia Elétrica

Multiobjective Optimization - Exercises


Prof. Frederico Gadelha Guimarães

Basic concepts

Exercise 1
Define local minimum and global minimum.
Exercise 2
Define unimodal and multimodal function.
Exercise 3
Write the necessary and sufficient conditions for characterizing a local minimum for
i. an unconstrained optimization problem.
ii. a constrained optimization problem.
Exercise 4
Let f(x) = 100(x2 - x1^2)^2 + (1 - x1)^2 subject to g(x) = x1^2 + x2^2 <= 2. Verify whether the
necessary conditions for a local minimum are satisfied at (1, 1)^T.
Exercise 5
It is known that the objective function
f(x) = 2x1^2 + x1x2 + x2^2 + x2x3 + x3^2 - 6x1 - 7x2 - 8x3 + 9
has a local minimum at x* = (6/5, 6/5, 17/5)^T.
i. Verify if the necessary conditions are valid at x*.
ii. Verify if x* is also the global minimum.
Exercise 6
Calculate the first and second derivatives of the function below at x = 0:
f(x) = x1^4 + x1x2 + (1 + x2)^2
i. Show that H(0) is not positive definite.
ii. Determine the local minimum of the function.
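As a quick sanity check for item i (not part of the original exercise), the sketch below evaluates the analytic Hessian of f(x) = x1^4 + x1x2 + (1 + x2)^2 at the origin and tests its determinant; a negative determinant in two dimensions already rules out positive definiteness:

```python
def hessian_at_origin():
    # Second partial derivatives of f(x) = x1^4 + x1*x2 + (1 + x2)^2
    # evaluated at x = (0, 0):
    #   d2f/dx1^2   = 12*x1^2 -> 0
    #   d2f/dx1dx2  = 1
    #   d2f/dx2^2   = 2
    return [[0.0, 1.0], [1.0, 2.0]]

H = hessian_at_origin()
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
print(det)  # negative determinant => H(0) is indefinite, not positive definite
```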


Karush-Kuhn-Tucker conditions

Exercise 7
Consider the following constrained minimization problem:
min f(x) = -x1^2 - x2^2

subject to  x1 + x2 <= 3
            x1 <= 2
            x1 >= 0
            x2 >= 0
i. Draw the feasible region and some level curves of the objective function.
ii. Identify the solution of the problem.
iii. Show geometrically that the Karush-Kuhn-Tucker conditions are satisfied at the solution.
iv. Show geometrically that the Karush-Kuhn-Tucker conditions are also satisfied at (2, 1)^T and
explain why this is possible, given that this point is not the solution of the problem.
Exercise 8
Consider the maximization of the function f(x) = x^2, -1 <= x <= 2. Show that the Karush-Kuhn-Tucker conditions are met at x = -1, x = 0, and x = 2, although the global optimum is x = 2.
Discuss.
Exercise 9
Solve the minimization problem below analytically using the optimality conditions for constrained
problems.
min f(x) = x1x2

subject to  g(x) = x1^2 + 3x2^2 - 3 <= 0
            h(x) = x1 + 3x2 - 9 = 0
Exercise 10
Consider the following problem:
min f(x) = x1^4 + x2^4 + 12x1^2 + 6x2^2 - x1x2 - x1 - x2

subject to  g1(x) = x1 + x2 >= 6
            g2(x) = 2x1 - x2 >= 3
            x1 >= 0; x2 >= 0

i. Write the equations for the optimality conditions.


ii. Show that (3, 3)^T is the only solution.

One-dimensional minimization

Exercise 11
Derive the one-dimensional minimization problem for the following case:
min f(x) = (x1^2 - x2)^2 + (1 - x1)^2
from the starting point x1 = (-2, -2)^T along the search direction d1 = (1.00, 0.25)^T.
Exercise 12
Find the minimum of f (x) = x(x 1.5) in the interval [0.0, 1.0] to within 10% of the exact value
using:


i. the dichotomous search method.


ii. the interval halving method.
Exercise 13
Given the initial interval [a, b] and a tolerance eps > 0, it is possible to calculate the number of
iterations n required by the Golden Section method to reduce the interval of uncertainty so that
(b - a) r^n <= eps, where r = (sqrt(5) - 1)/2 is the reduction ratio. Show how to calculate the
number of iterations required.
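The calculation asked for above can be sketched directly (this is an illustration, not part of the exercise statement): since each Golden Section iteration shrinks the bracket by the constant factor r = (sqrt(5) - 1)/2 ≈ 0.618, the smallest n with (b - a) r^n <= eps follows from taking logarithms:

```python
import math

def golden_section_iterations(a, b, eps):
    """Smallest n such that (b - a) * r**n <= eps, with r = (sqrt(5)-1)/2."""
    r = (math.sqrt(5.0) - 1.0) / 2.0   # bracket reduction ratio per iteration
    return math.ceil(math.log(eps / (b - a)) / math.log(r))

n = golden_section_iterations(0.0, 1.0, 1e-3)
print(n)  # number of iterations to shrink [0, 1] below 1e-3
```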
Exercise 14
Find the minimum of the function
f(λ) = 0.65 - 0.75/(1 + λ^2) - 0.65 λ tan^{-1}(1/λ)

using the secant method with an initial step size of t0 = 0.1, λ1 = 0.0 and ε = 0.01. Plot the
graph of the function in the interval [0, 3] and identify its minimum.

Unconstrained methods

Exercise 15
Let f(x) = x1^2 + 25x2^2 and x0 = (2, 2)^T.
i. Apply the Gradient method to minimize f (x).
ii. Apply the Newton method to minimize f (x).
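A minimal sketch of the two methods on this quadratic (an illustration, not a full solution): for f(x) = x1^2 + 25x2^2 the gradient is (2x1, 50x2), the exact line-search step for a quadratic (1/2)x'Qx is alpha = (g'g)/(g'Qg) with Q = diag(2, 50), and Newton uses the constant Hessian, so it reaches the minimizer in a single step:

```python
def grad(x):
    return [2.0 * x[0], 50.0 * x[1]]

def steepest_descent_step(x):
    # Exact line search for a quadratic: alpha = (g.g) / (g.Q g), Q = diag(2, 50)
    g = grad(x)
    gg = g[0] ** 2 + g[1] ** 2
    gQg = 2.0 * g[0] ** 2 + 50.0 * g[1] ** 2
    a = gg / gQg
    return [x[0] - a * g[0], x[1] - a * g[1]]

def newton_step(x):
    # H = diag(2, 50) is constant, so Newton solves the problem in one step
    g = grad(x)
    return [x[0] - g[0] / 2.0, x[1] - g[1] / 50.0]

x = [2.0, 2.0]
for _ in range(20):
    x = steepest_descent_step(x)
print(x)                         # zig-zags toward (0, 0); condition number is 25
print(newton_step([2.0, 2.0]))   # lands exactly on the minimizer (0, 0)
```

The contrast between the two printouts is the point of the exercise: the gradient method needs many iterations on an ill-conditioned quadratic, while Newton needs one.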
Exercise 16
Consider the minimization of f(x) = x1^3 + x1x2 - x1^2 - x2^2 with x0 = (1, 1)^T. A computational
program carefully written to execute the Newton method on this problem was not successful. Discuss
the possible reasons for this failure.
Exercise 17
Derive the update formula of the Newton method.
Exercise 18
Compare the gradients of
f(x) = 100(x2 - x1^2)^2 + (1 - x1)^2
at the point (0.5, 0.5)^T given by the following methods:
i. analytical differentiation.
ii. forward difference method.
iii. central difference method.
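The three approaches can be sketched as follows (an illustration with an assumed step size h = 1e-6, not the exercise's required hand calculation); the analytic gradient is (df/dx1, df/dx2) = (-400 x1 (x2 - x1^2) - 2(1 - x1), 200 (x2 - x1^2)):

```python
def f(x1, x2):
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

def grad_analytic(x1, x2):
    return (-400.0 * x1 * (x2 - x1 ** 2) - 2.0 * (1.0 - x1),
            200.0 * (x2 - x1 ** 2))

def grad_forward(x1, x2, h=1e-6):
    # One-sided difference: O(h) truncation error, n+1 function evaluations
    f0 = f(x1, x2)
    return ((f(x1 + h, x2) - f0) / h, (f(x1, x2 + h) - f0) / h)

def grad_central(x1, x2, h=1e-6):
    # Two-sided difference: O(h^2) truncation error, 2n function evaluations
    return ((f(x1 + h, x2) - f(x1 - h, x2)) / (2.0 * h),
            (f(x1, x2 + h) - f(x1, x2 - h)) / (2.0 * h))

print(grad_analytic(0.5, 0.5))  # exact values
print(grad_forward(0.5, 0.5))   # close, with O(h) error
print(grad_central(0.5, 0.5))   # closer, with O(h^2) error
```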
Exercise 19
Show that an element of the Hessian matrix can be approximated with forward finite differences
as:

∂²f/∂xi∂xj ≈ [ f(x + Δxi·ei + Δxj·ej) − f(x + Δxi·ei) − f(x + Δxj·ej) + f(x) ] / (Δxi Δxj)
Exercise 20
What are the advantages of quasi-Newton optimization methods?
Exercise 21
Consider the minimization of f(x) = (x1 + 2x2 - 7)^2 + (2x1 + x2 - 5)^2. If a base simplex is
defined by the vertices
x1 = (2, 2)^T;  x2 = (3, 0)^T;  x3 = (1, 1)^T
find a sequence of four improved vectors using reflection, expansion and/or contraction.


Exercise 22
Consider the minimization of the objective function f(x) = 100(x2 - x1^2)^2 + (1 - x1)^2 from the
initial point (-1.2, 1.0)^T.
i. Perform two iterations of the steepest descent method.
ii. Perform two iterations of the Fletcher-Reeves method.
iii. Perform two iterations of the BFGS method.

Constrained methods

Exercise 23
Consider the following problem:
min f(x) = 5x1

subject to  g1(x) = x1 + x2 <= 0
            g2(x) = x1^2 + x2^2 - 4 <= 0
i. Draw the feasible region and determine the solution geometrically.
ii. Write a barrier function that could be used to solve this problem.
iii. Write a penalty function that could be used to solve this problem.
iv. Verify the Karush-Kuhn-Tucker conditions at the solution.
v. Verify that these conditions are not satisfied at any other feasible point.
Exercise 24
Contrast Penalty methods and Augmented Lagrangian methods, highlighting the pros and cons of
each approach.
Exercise 25
In penalty function methods, a single parameter is used to penalize all the constraints. What
are the advantages of using one parameter for each constraint function? Suggest an update scheme
for these parameters.
Exercise 26
Let the problem: min f(x) = x^3, subject to h(x) = x - 1 = 0, for which the optimal solution is
x* = 1.
i. Write a penalty function that transforms the original constrained problem into an unconstrained problem.
ii. Calculate the solution of the resulting unconstrained problem for u = 1, 10, 100.
iii. Let u → ∞ and show that the solution converges to x* = 1.
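As an illustration of items ii and iii (assuming the quadratic penalty phi(x, u) = x^3 + u(x - 1)^2, one common choice): setting the derivative 3x^2 + 2u(x - 1) to zero and taking the root near 1 gives a closed-form minimizer, which approaches x* = 1 as u grows:

```python
import math

def penalty_minimizer(u):
    # Stationary point of phi(x, u) = x**3 + u*(x - 1)**2 near x = 1:
    # 3x^2 + 2u(x - 1) = 0  ->  x = (-u + sqrt(u^2 + 6u)) / 3
    return (-u + math.sqrt(u * u + 6.0 * u)) / 3.0

for u in (1.0, 10.0, 100.0):
    print(u, penalty_minimizer(u))  # minimizer approaches x* = 1 as u grows
```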
Exercise 27
Consider the following problem
min f(x) = x1^2 + x2^2

subject to  g1(x) = 2x1 + x2 - 2 <= 0
            g2(x) = 1 - x2 <= 0

i. Determine the optimal solution.


ii. Choose a penalty function, make u0 = 1 and x0 = (2, 6)T , and determine x1 by using the
Gradient method.


iii. Choose a penalty function, make u0 = 1 and x0 = (2, 6)T , and determine x2 by using the
Conjugate Gradient method.
Exercise 28
Let the objective function f (x) = 6x21 + 4x1 x2 + 3x22 be minimized subject to the constraint
h(x) = x1 + x2 = 5.
i. Write the augmented Lagrangian function for this problem.
ii. Setting u0 = 2 and λ0 = 0, perform three iterations of the Augmented Lagrangian method,
finding the minimum from direct application of the first-order optimality condition.
iii. Setting u0 = 20 and λ0 = 0, perform three iterations of the Augmented Lagrangian method,
finding the minimum from direct application of the first-order optimality condition.
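The iterations above can be sketched as follows (an illustration, not the hand derivation the exercise asks for). For L_A = f(x) + λ·h(x) + (u/2)·h(x)^2 with f = 6x1^2 + 4x1x2 + 3x2^2 and h = x1 + x2 - 5, setting the gradient to zero and subtracting the two equations gives x2 = 4x1 and x1 = (5u - λ)/(28 + 5u), so each inner minimization has a closed form; the multiplier update is λ ← λ + u·h:

```python
def aug_lagrangian_iterations(u, lam=0.0, iters=3):
    # Closed-form minimizer of L_A for this quadratic problem:
    # grad L_A = 0  =>  x2 = 4*x1  and  x1 = (5u - lam) / (28 + 5u)
    history = []
    for _ in range(iters):
        x1 = (5.0 * u - lam) / (28.0 + 5.0 * u)
        x2 = 4.0 * x1
        h = x1 + x2 - 5.0
        lam = lam + u * h            # standard multiplier update
        history.append((x1, x2, lam))
    return history

for step in aug_lagrangian_iterations(u=20.0):
    print(step)  # iterates approach x* = (1, 4), multiplier approaches -28
```

Running the same loop with u = 2 shows much slower progress toward the constrained minimizer, which is the comparison items ii and iii are after.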
Exercise 29
Show that the Lagrangian and the Augmented Lagrangian have the same critical points.
Exercise 30
Show that the update formula for the Lagrange multiplier associated with an inequality constraint
is given by:


λ_{i,k+1} = λ_{i,k} + u_k · max( g_i(x_{k+1}), -λ_{i,k}/u_k )
Exercise 31
Show that the Augmented Lagrangian function:
L_A(x, λ, μ, u) = L(x, λ, μ) + (u/2) p(x)

can be written in the equivalent form below:

L_A(x, λ, μ, u) = f(x) + (u/2) { Σ_i [ max(g_i(x), -λ_i/u) + λ_i/u ]^2 + Σ_j [ h_j(x) + μ_j/u ]^2 }
Exercise 32
Consider the problem:

min f(x) = x1 - x2

subject to g1(x) = 3x1^2 - 2x1x2 + x2^2 - 1 <= 0

i. Generate the approximating LP problem at x0 = (2, 2)^T.


ii. Solve the approximating LP problem using the graphical method and determine whether the
resulting solution is feasible.
Exercise 33
Find the solution of the following problem using the MATLAB function fmincon with the starting
point x0 = (1.0, 1.0)T .
min f(x) = x1^2 + x2^2

subject to  g1(x) = 4 - x1 - x2^2 <= 0
            g2(x) = 3x2 - x1 <= 0
            g3(x) = -3x2 - x1 <= 0


Multiobjective problems

Exercise 34
Define Pareto-optimal solution, Pareto-optimal set and Pareto front.
Exercise 35
Show that the weighted sum method (Pw formulation) might not produce all the efficient solutions
in some cases.
Exercise 36
Discuss the disadvantages of the ε-constrained method.
Exercise 37
Show that the Karush-Kuhn-Tucker conditions can be satisfied at points that are not
Pareto-optimal.
Exercise 38
Determine the set of Pareto-optimal solutions of the following multiobjective problem:
min f1(x) = x1 + x2
    f2(x) = x1^2

subject to  g1(x) = x1^2 + x2^2 - 4 <= 0
            g2(x) = x2 - 1 <= 0
            g3(x) = x1 - x2 - 2 <= 0
Exercise 39
Determine the set of Pareto-optimal solutions of the following multiobjective problem:
min f1(x) = x1^2 + x2^2
    f2(x) = (x1 - 3)^2 + (x2 - 3)^2
