
t1 Sol

The document outlines a tutorial on optimization methods, including steepest descent and Newton's method, applied to various functions. It provides detailed calculations for finding optimal solutions, including gradients, Hessians, and Lagrange multipliers. The tutorial concludes with a quadratic optimization problem, verifying necessary conditions for optimality and confirming the solution as (12, 9).

Uploaded by bermudaya

BT4240 Tutorial # 1, January 28, 2025.

1. Starting from the initial point x₀ = (1, −1)ᵀ, compute x₁ using the steepest descent approach
with optimal stepsize (exact line search) to find the minimum of the function

f(x) = f(x₁, x₂) = x₁³ + x₂³ − 2x₁² + 3x₂² − 8

Also compute the gradient at x₁.

∇f(x₁, x₂) = (3x₁² − 4x₁, 3x₂² + 6x₂)ᵀ

∇f(1, −1) = (−1, −3)ᵀ

d = −∇f(1, −1) = (1, 3)ᵀ

x₀ + λd = (1 + λ, −1 + 3λ)ᵀ

∇f(x₀ + λd) = (3(1 + λ)² − 4(1 + λ), 3(−1 + 3λ)² + 6(−1 + 3λ))ᵀ

Let φ(λ) = f(x₀ + λd). Setting its derivative along the search direction to 0:

φ′(λ) = ∇f(x₀ + λd)ᵀ d = 3(1 + λ)² − 4(1 + λ) + 9(−1 + 3λ)² + 18(−1 + 3λ)
      = 84λ² + 2λ − 10 = 0

λ = (−2 ± √(4 + 3360)) / 168 = (−2 ± 58) / 168

λ₁ = 1/3,   λ₂ = −5/14

Since φ″(λ) = 168λ + 2 > 0 at λ₁ = 1/3, the minimizing stepsize is λ = 1/3.
The updated point x₁ is

x₁ = x₀ + λ₁d = (1, −1)ᵀ + (1/3)(1, 3)ᵀ = (4/3, 0)ᵀ

∇f(x₁) = ∇f(4/3, 0) = (0, 0)ᵀ

so x₁ is a stationary point of f.
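The step above can be verified numerically. Below is a minimal sketch in plain Python (no external libraries); the gradient formula and the quadratic φ′(λ) = 84λ² + 2λ − 10 come directly from the derivation above.

```python
# Steepest descent step with exact line search for
# f(x1, x2) = x1^3 + x2^3 - 2*x1^2 + 3*x2^2 - 8, starting from (1, -1).

def grad_f(x1, x2):
    # Gradient derived above: (3*x1^2 - 4*x1, 3*x2^2 + 6*x2)
    return (3 * x1**2 - 4 * x1, 3 * x2**2 + 6 * x2)

x0 = (1.0, -1.0)
g0 = grad_f(*x0)
d = (-g0[0], -g0[1])          # descent direction (1, 3)

# phi'(lam) = 84*lam^2 + 2*lam - 10; the positive root is the minimizer.
a, b, c = 84.0, 2.0, -10.0
lam = (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

x1 = (x0[0] + lam * d[0], x0[1] + lam * d[1])
print(lam)          # the minimizing stepsize, equal to 1/3
print(x1)           # the updated point (4/3, 0)
print(grad_f(*x1))  # the gradient vanishes at x1
```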

2. Use the steepest descent method to approximate the optimal solution to the problem

min x₁² + 2x₂² − 2x₁x₂ − 2x₂

with fixed stepsize λ = 0.1. Starting from the point x₀ = (0.5, 0.5)ᵀ, show the first few iterations.

g(x₁, x₂) = ∇f(x₁, x₂) = (2x₁ − 2x₂, 4x₂ − 2x₁ − 2)ᵀ

The first few iterations (xₖ₊₁ = xₖ − λ g(xₖ)):

  k     x₁        x₂        g₁        g₂       f(x₁, x₂)
  1   0.50000   0.50000   0.00000  -1.00000   -0.75000
  2   0.50000   0.60000  -0.20000  -0.60000   -0.83000
  3   0.52000   0.66000  -0.28000  -0.40000   -0.86480
  4   0.54800   0.70000  -0.30400  -0.29600   -0.88690
  5   0.57840   0.72960  -0.30240  -0.23840   -0.90402
  ...
 96   0.99969   0.99981  -0.00024  -0.00015   -1.00000
 97   0.99972   0.99982  -0.00022  -0.00013   -1.00000
 98   0.99974   0.99984  -0.00020  -0.00012   -1.00000
 99   0.99976   0.99985  -0.00019  -0.00011   -1.00000
100   0.99978   0.99986  -0.00017  -0.00011   -1.00000

Solution: x∗ = (1, 1).
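The table can be reproduced with a few lines of plain Python (a sketch; the stepsize, start point, and gradient are those used above):

```python
# Fixed-stepsize steepest descent for f(x1, x2) = x1^2 + 2*x2^2 - 2*x1*x2 - 2*x2.

def f(x1, x2):
    return x1**2 + 2 * x2**2 - 2 * x1 * x2 - 2 * x2

def grad(x1, x2):
    return (2 * x1 - 2 * x2, 4 * x2 - 2 * x1 - 2)

lam = 0.1                      # fixed stepsize
x = (0.5, 0.5)
for k in range(1, 101):
    g = grad(*x)
    if k <= 5:                 # print the first rows of the table above
        print(f"{k} {x[0]:.5f} {x[1]:.5f} {g[0]:.5f} {g[1]:.5f} {f(*x):.5f}")
    x = (x[0] - lam * g[0], x[1] - lam * g[1])

print(x)                       # close to the solution (1, 1)
```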


3. Use Newton's method to find the optimal solution to the problem

min x₁² + 2x₂² − 2x₁x₂ − 2x₂

starting from x₀ = (0.5, 0.5)ᵀ.

Compute the Hessian and its inverse, then update x:

H(x₁, x₂) = [  2  −2 ]
            [ −2   4 ]

H⁻¹(x₁, x₂) = (1/4) [ 4  2 ]
                    [ 2  2 ]

x₁ = x₀ − H⁻¹(0.5, 0.5) ∇f(0.5, 0.5), where ∇f(0.5, 0.5) = (0, −1)ᵀ:

x₁ = (0.5, 0.5)ᵀ − (1/4)(−2, −2)ᵀ = (0.5, 0.5)ᵀ − (−0.5, −0.5)ᵀ = (1, 1)ᵀ

Because the objective is quadratic, Newton's method reaches the exact minimizer in a single iteration.
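A quick numeric check of the single Newton step, in plain Python; the hand-computed 2×2 inverse Hessian is hard-coded:

```python
# Newton step for f(x1, x2) = x1^2 + 2*x2^2 - 2*x1*x2 - 2*x2 from (0.5, 0.5).

def grad(x1, x2):
    return (2 * x1 - 2 * x2, 4 * x2 - 2 * x1 - 2)

# H = [[2, -2], [-2, 4]] is constant; H^{-1} = (1/4)[[4, 2], [2, 2]].
Hinv = ((1.0, 0.5), (0.5, 0.5))

x0 = (0.5, 0.5)
g = grad(*x0)                  # (0, -1)
step = (Hinv[0][0] * g[0] + Hinv[0][1] * g[1],
        Hinv[1][0] * g[0] + Hinv[1][1] * g[1])
x1 = (x0[0] - step[0], x0[1] - step[1])
print(x1)                      # (1.0, 1.0): the exact minimizer in one step
```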
4. Consider the problem: minimize (x₁³ − x₂)² + 2(x₂ − x₁)⁴. If Newton's method is applied
starting from x₀ = (0, 1)ᵀ, what will x₁ be?

∇f(x₁, x₂) = (2(x₁³ − x₂)(3x₁²) + 8(x₂ − x₁)³(−1), 2(x₁³ − x₂)(−1) + 8(x₂ − x₁)³)ᵀ
           = (6x₁²(x₁³ − x₂) − 8(x₂ − x₁)³, −2(x₁³ − x₂) + 8(x₂ − x₁)³)ᵀ

∇f(0, 1) = (−8, 10)ᵀ

Compute the Hessian and its inverse, then update x:

H(x₁, x₂) = [ 12x₁(x₁³ − x₂) + 6x₁²(3x₁²) + 24(x₂ − x₁)²    −6x₁² − 24(x₂ − x₁)² ]
            [ −6x₁² − 24(x₂ − x₁)²                            2 + 24(x₂ − x₁)²    ]

H(0, 1) = [  24  −24 ]
          [ −24   26 ]

H⁻¹(0, 1) = [ 13/24  1/2 ]
            [  1/2   1/2 ]

x₁ = x₀ − H⁻¹(0, 1) ∇f(0, 1)
   = (0, 1)ᵀ − ((13/24)(−8) + (1/2)(10), (1/2)(−8) + (1/2)(10))ᵀ
   = (0, 1)ᵀ − (2/3, 1)ᵀ
   = (−2/3, 0)ᵀ
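The same step can be checked in plain Python with the closed-form inverse of a 2×2 matrix (a sketch; the gradient and Hessian entries are coded from the derivation above):

```python
# One Newton step for f(x1, x2) = (x1^3 - x2)^2 + 2*(x2 - x1)^4 from (0, 1).

def grad(x1, x2):
    return (6 * x1**2 * (x1**3 - x2) - 8 * (x2 - x1)**3,
            -2 * (x1**3 - x2) + 8 * (x2 - x1)**3)

def hess(x1, x2):
    h11 = 12 * x1 * (x1**3 - x2) + 6 * x1**2 * (3 * x1**2) + 24 * (x2 - x1)**2
    h12 = -6 * x1**2 - 24 * (x2 - x1)**2
    h22 = 2 + 24 * (x2 - x1)**2
    return h11, h12, h22       # symmetric, so h21 = h12

x0 = (0.0, 1.0)
g = grad(*x0)                  # (-8, 10)
h11, h12, h22 = hess(*x0)      # 24, -24, 26
det = h11 * h22 - h12 * h12    # 48

# step = H^{-1} g, using the 2x2 inverse (1/det) [[h22, -h12], [-h12, h11]].
step = ((h22 * g[0] - h12 * g[1]) / det,
        (h11 * g[1] - h12 * g[0]) / det)
x1 = (x0[0] - step[0], x0[1] - step[1])
print(x1)                      # (-2/3, 0)
```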

5. Consider the problem:

minimize (x₁ − 3)² + (x₂ − 2)²
s.t.
    x₁² + x₂² ≤ 5
    x₁ + 2x₂ ≤ 4
    −x₁ ≤ 0
    −x₂ ≤ 0

(a) Verify that the necessary optimality conditions hold at the solution point (2, 1).
First note that the first two constraints are binding at x = (2, 1), since 2² + 1² = 5 and
2 + 2(1) = 4. Thus the Lagrange multipliers λ₃ and λ₄ associated with the other two
constraints, −x₁ ≤ 0 and −x₂ ≤ 0 respectively, are equal to zero.
Note also that

∇f(x) = (2(x₁ − 3), 2(x₂ − 2))ᵀ = (−2, −2)ᵀ,  ∇g₁(x) = (2x₁, 2x₂)ᵀ = (4, 2)ᵀ,  ∇g₂(x) = (1, 2)ᵀ

Thus λ₁ = 1/3 and λ₂ = 2/3 satisfy the KKT condition

∇f(x) + λ₁∇g₁(x) + λ₂∇g₂(x) = 0:

(−2, −2)ᵀ + (1/3)(4, 2)ᵀ + (2/3)(1, 2)ᵀ = (0, 0)ᵀ

Since the objective function and all the constraints are convex, the KKT conditions are also
sufficient, so (2, 1) is the optimal solution.
(b) Check whether the KKT conditions are satisfied at the point x̂ᵀ = (0, 0).
Here the 3rd and 4th constraints are binding, while the 1st and 2nd are not. Hence, let
λ₁ = λ₂ = 0. Note that

∇f(x̂) = (−6, −4)ᵀ,  ∇g₃(x̂) = (−1, 0)ᵀ,  ∇g₄(x̂) = (0, −1)ᵀ

Hence, we want to find λ₃ ≥ 0 and λ₄ ≥ 0 such that

∇f(x̂) + λ₃∇g₃(x̂) + λ₄∇g₄(x̂) = 0.

This is not possible: the equation forces λ₃ = −6 and λ₄ = −4, both negative, so the KKT
conditions fail at x̂.
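The multiplier calculations in (a) and (b) can be double-checked with a few lines of plain Python (a sketch with the gradients hard-coded from above, not a general KKT solver):

```python
# (a) At x = (2, 1): constraints g1, g2 are binding; check stationarity with
#     lambda1 = 1/3 and lambda2 = 2/3.
grad_f, grad_g1, grad_g2 = (-2.0, -2.0), (4.0, 2.0), (1.0, 2.0)
l1, l2 = 1.0 / 3.0, 2.0 / 3.0
residual = tuple(a + l1 * b + l2 * c
                 for a, b, c in zip(grad_f, grad_g1, grad_g2))
print(residual)                # (0, 0) up to floating-point rounding

# (b) At x_hat = (0, 0): grad g3 = (-1, 0) and grad g4 = (0, -1), so the
#     stationarity equation grad_f_hat - (l3, l4) = 0 forces l3 = -6, l4 = -4.
grad_f_hat = (-6.0, -4.0)
l3, l4 = grad_f_hat
print(l3 >= 0 and l4 >= 0)     # False: dual feasibility fails at (0, 0)
```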

6. Consider the quadratic optimization problem:

min −15x₁ − 30x₂ − 4x₁x₂ + 2x₁² + 4x₂²
subject to
    x₁ + 2x₂ ≤ 30
    x₁ ≥ 0
    x₂ ≥ 0

(a) Find the matrix Q and vector c such that the objective function above can be
expressed as (1/2)xᵀQx + cᵀx.

f(x₁, x₂) = −15x₁ − 30x₂ − 4x₁x₂ + 2x₁² + 4x₂²

∇f(x₁, x₂) = Qx + c = (−15 − 4x₂ + 4x₁, −30 − 4x₁ + 8x₂)ᵀ

Q = H(x₁, x₂) = [  4  −4 ]
                [ −4   8 ]

c = (−15, −30)ᵀ

Check that (1/2)xᵀQx + cᵀx = −15x₁ − 30x₂ − 4x₁x₂ + 2x₁² + 4x₂²:

Qx = (4x₁ − 4x₂, −4x₁ + 8x₂)ᵀ

xᵀQx = (x₁, x₂)(4x₁ − 4x₂, −4x₁ + 8x₂)ᵀ = 4x₁² − 8x₁x₂ + 8x₂²

(1/2)xᵀQx + cᵀx = 2x₁² − 4x₁x₂ + 4x₂² − 15x₁ − 30x₂ = −15x₁ − 30x₂ − 4x₁x₂ + 2x₁² + 4x₂²
(b) Show that the solution of the problem is (x₁, x₂) = (12, 9).
We check all the necessary conditions:

• Primal feasibility:
    x₁ + 2x₂ − 30 = 12 + 2(9) − 30 = 0 ≤ 0
    −x₁ = −12 ≤ 0
    −x₂ = −9 ≤ 0

• Dual feasibility: since the nonnegativity constraints are not binding, let λ₂ = λ₃ = 0.
  We need to find λ₁ ≥ 0 that satisfies the stationarity condition:

    (−15 − 4x₂ + 4x₁, −30 − 4x₁ + 8x₂)ᵀ + λ₁(1, 2)ᵀ = (0, 0)ᵀ

  For x₁ = 12, x₂ = 9:

    (−3, −6)ᵀ + λ₁(1, 2)ᵀ = (0, 0)ᵀ

  which is true with λ₁ = 3 ≥ 0.

Summary: all the necessary conditions are satisfied. The objective function and the
constraints are convex, hence (12, 9) is the solution of the QP.
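Part (b) can also be sanity-checked numerically in plain Python; the coarse grid search over the feasible region below is only an illustration, not a proper QP solver:

```python
# KKT stationarity at (12, 9) with lambda1 = 3, plus a grid-search sanity check.

def f(x1, x2):
    return -15 * x1 - 30 * x2 - 4 * x1 * x2 + 2 * x1**2 + 4 * x2**2

def grad(x1, x2):
    return (-15 - 4 * x2 + 4 * x1, -30 - 4 * x1 + 8 * x2)

x_star = (12.0, 9.0)
lam1 = 3.0
g = grad(*x_star)                              # (-3, -6)
residual = (g[0] + lam1 * 1, g[1] + lam1 * 2)  # grad of x1 + 2*x2 - 30 is (1, 2)
print(residual)                                # (0.0, 0.0)

# Coarse feasible grid: no point does better than f(12, 9) = -270.
best = min(
    f(0.1 * i, 0.1 * j)
    for i in range(301)            # x1 in [0, 30]
    for j in range(151)            # x2 in [0, 15]
    if 0.1 * i + 0.2 * j <= 30 + 1e-9
)
print(best, f(*x_star))
```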
