MAE Optimization Lecture 3 Handout

The document discusses unconstrained optimization, focusing on optimality conditions, including first and second-order necessary and sufficient conditions. It defines global and local solutions, stationary points, and illustrates these concepts with examples of minima, maxima, and saddle points. The content is structured into sections covering definitions, necessary conditions, and proofs related to optimization problems.


1MAE004 - Optimization

Unconstrained Optimization

E. Flayac

March 11th, 2024

Introduction to Optimization | | March 11th | Slide 1/43


Outline

Optimality conditions for unconstrained optimization

Numerical Optimization

Descent direction methods

Introduction to Optimization | | March 11th | Slide 2/43


Optimality conditions for unconstrained optimization

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 3/43
Unconstrained Optimization Problem

We focus on the case where A = Rn . We obtain the following problem:

min_{x∈Rn} f (x)    (Punc )

Definitions
▶ We say that x∗ ∈ Rn is a global solution (global minimum) of
(Punc ) if ∀x ∈ Rn , f (x∗ ) ≤ f (x).
▶ We say that x∗ ∈ Rn is a local solution (local minimum) of (Punc ) if
∃r > 0, ∀x ∈ B(x∗ , r ), f (x∗ ) ≤ f (x).
▶ We say that x∗ ∈ Rn is a strict local solution (strict local minimum)
of (Punc ) if there exists r > 0 such that:
f (x∗ ) < f (x), ∀x ∈ B(x∗ , r )\{x∗ }.

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 4/43
Example
f (x)

Local Minimum x

Global Minimum

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 5/43
Necessary Optimality Conditions

min_{x∈Rn} f (x)    (Punc )

First-Order Necessary Optimality Condition


Let x∗ ∈ Rn . If f ∈ C 1 (Rn , R) and x∗ is a local solution of (Punc ) then
∇f (x∗ ) = 0. (1)

Remarks
▶ An element x∗ ∈ Rn satisfying ∇f (x∗ ) = 0 is called a stationary
point or a critical point.
▶ Condition (1) is necessary but not sufficient, e.g., n = 1,
f (x) = −x 2 , f ′ (x) = −2x.
In this case, x ∗ = 0 is a stationary point (f ′ (0) = 0) but a global
maximum.

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 6/43
Necessary Optimality Conditions: proof
First-Order Necessary Optimality Condition: Proof
▶ Taylor expansion at x∗ for h ∈ Rn
f (x∗ + h) = f (x∗ ) + ∇f (x∗ )T h + ||h||ϵ(h)
For d ∈ Rn and t > 0 and h = td
f (x∗ + td) = f (x∗ ) + t∇f (x∗ )T d + t||d||ϵ(td)
▶ For t small enough, x∗ + td ∈ B(x∗ , r ), thus f (x∗ ) ≤ f (x∗ + td) as x∗ is a local
minimum:
f (x∗ ) ≤ f (x∗ ) + t∇f (x∗ )T d + t||d||ϵ(td)
Dividing by t > 0 gives:
0 ≤ ∇f (x∗ )T d + ||d||ϵ(td), where ||d||ϵ(td) → 0 when t → 0

▶ One gets ⟨∇f (x∗ ), d⟩ ≥ 0 for any d ∈ Rn


▶ Thus, by (d ← −d), ⟨∇f (x∗ ), d⟩ = 0 for any d ∈ Rn ,
and ∇f (x∗ ) = 0
Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 7/43
Necessary Optimality Conditions: Remark

Remark
Taylor expansion at x∗ for h ∈ Rn
f (x∗ + h) = f (x∗ ) + ∇f (x∗ )T h + ||h||ϵ(h)
If ∇f (x∗ ) = 0 (stationary point), then
f (x∗ + h) = f (x∗ ) + ||h||ϵ(h)
f (x∗ + h) ≈ f (x∗ )
Therefore:
▶ f is approximately constant around x∗
▶ The graph of f is ”flat” around x∗
▶ Question : do we have
▶ f (x∗ + h) > f (x∗ ) for any h ?
▶ f (x∗ + h) < f (x∗ ) for any h ?
▶ or something else ?

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 8/43
Example of stationary point in 2D : Minimum

Figure: Graph of f1 (x, y) = 3x² + 2.5y² with curves y = 0 (blue) and x = 0 (red)

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 9/43
Example of stationary point in 2D : Maximum

Figure: Graph of f2 (x, y) = −3x² − 2.5y² with curves y = 0 (blue) and x = 0 (red)

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 10/43
Example of stationary point in 2D : Saddle point

Figure: Graph of f3 (x, y) = 3x² − 2.5y² with curves y = 0 (blue) and x = 0 (red)

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 11/43
Example of stationary point in 2D : other

Figure: Graph of f4 (x, y) = 3x² − 2.5y³ with curves y = 0 (blue) and x = 0 (red)

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 12/43
Examples of stationary points: minimum and maximum

Set x∗ = (x∗ , y∗ ) = (0, 0).
Stationary point and minimum
f1 (x, y) = 3x² + 2.5y²,  ∇f1 (x, y) = (6x, 5y)
∇f1 (x∗ , y∗ ) = (0, 0) ⇒ x∗ is a stationary point and a (global) minimum
Stationary point and maximum
f2 (x, y) = −3x² − 2.5y²,  ∇f2 (x, y) = (−6x, −5y)
∇f2 (x∗ , y∗ ) = (0, 0) ⇒ x∗ is a stationary point and a (global) maximum

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 13/43
Examples of stationary points: saddle points and other

Set x∗ = (x∗ , y∗ ) = (0, 0).
Stationary point and a saddle point
f3 (x, y) = 3x² − 2.5y²,  ∇f3 (x, y) = (6x, −5y)
∇f3 (x∗ , y∗ ) = (0, 0) ⇒ x∗ is a stationary point and a saddle point
Other stationary point
f4 (x, y) = 3x² − 2.5y³,  ∇f4 (x, y) = (6x, −7.5y²)
∇f4 (x∗ , y∗ ) = (0, 0) ⇒ x∗ is another type of stationary point

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 14/43
Necessary Optimality Conditions: Remark

Remark
If x∗ is a stationary point:
▶ f is approximately constant around x∗
▶ The graph of f is ”flat” around x∗
▶ Question : do we have
▶ f (x∗ + h) > f (x∗ ) for any h ?
▶ f (x∗ + h) < f (x∗ ) for any h ?
▶ or something else ?

▶ Answer: we cannot conclude using only ∇f (x∗ )

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 15/43
Necessary optimality conditions

min_{x∈Rn} f (x)    (Punc )

Second-order necessary optimality condition


Let x∗ ∈ Rn . If f ∈ C 2 (Rn , R) and x∗ is a local solution of (Punc ),
then,
∇f (x∗ ) = 0 and ∇2 f (x∗ ) ⪰ 0. (2)

Remarks
Condition (2) is necessary but still not sufficient, e.g.:
▶ n = 1, f (x) = −x 4 , f ′ (x) = −4x 3 , f ′′ (x) = −12x 2
▶ For x ∗ = 0 one gets f ′ (x ∗ ) = f ′ (0) = 0 and f ′′ (x ∗ ) = f ′′ (0) = 0
▶ In this case, x ∗ = 0 satisfies (2) but it is a global maximum.

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 16/43
Sufficient optimality conditions
Second-order sufficient optimality condition
Let x∗ ∈ Rn . If f ∈ C 2 (Rn , R) and x∗ satisfies:
∇f (x∗ ) = 0 and ∇2 f (x∗ ) ≻ 0,
then x∗ is a strict local solution of (Punc ).
Sketch of proof
▶ Taylor expansion (of order 2) at x∗ for h ∈ Rn :
f (x∗ + h) = f (x∗ ) + ∇f (x∗ )T h + (1/2) hT ∇2 f (x∗ )h + ||h||2 ϵ(h)
▶ We assumed that ∇f (x∗ ) = 0, so we get:
f (x∗ + h) = f (x∗ ) + (1/2) hT ∇2 f (x∗ )h + ||h||2 ϵ(h),
where (1/2) hT ∇2 f (x∗ )h > 0 for h ̸= 0 (as ∇2 f (x∗ ) ≻ 0) and ||h||2 ϵ(h) ≈ 0 for h ≈ 0
▶ Hence, for h ≈ 0 with h ̸= 0, f (x∗ + h) > f (x∗ ), which implies that x∗ is a strict
local solution of (Punc )

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 17/43
Examples of stationary points: minimum and maximum
Set x∗ = (x∗ , y∗ ) = (0, 0).

Stationary point and minimum: f1 (x, y) = 3x² + 2.5y²
∇f1 (x∗ , y∗ ) = (0, 0) and ∇2 f1 (x∗ , y∗ ) = diag(6, 5) ≻ 0

Stationary point and maximum: f2 (x, y) = −3x² − 2.5y²
∇f2 (x∗ , y∗ ) = (0, 0) and ∇2 f2 (x∗ , y∗ ) = diag(−6, −5) ≺ 0

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 18/43
Examples of stationary points: saddle points and other
Set x∗ = (x∗ , y∗ ) = (0, 0).

Stationary point and a saddle point: f3 (x, y) = 3x² − 2.5y²
∇f3 (x∗ , y∗ ) = (0, 0) and ∇2 f3 (x∗ , y∗ ) = diag(6, −5), which is indefinite (neither ⪰ 0 nor ⪯ 0)

Other stationary point: f4 (x, y) = 3x² − 2.5y³
∇f4 (x∗ , y∗ ) = (0, 0) and ∇2 f4 (x∗ , y∗ ) = diag(6, −15y∗ ) = diag(6, 0) ⪰ 0

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 19/43
Classification of stationary points using the Hessian: ∇2 f (x∗ )

Let f ∈ C 2 (Rn , R) and x∗ ∈ Rn such that ∇f (x∗ ) = 0.


Since ∇2 f (x∗ ) is symmetric, one has:
∇2 f (x∗ ) = UΛU T
with U orthogonal and Λ = diag(λ1 , . . . , λn ), where λ1 , . . . , λn are the eigenvalues of ∇2 f (x∗ ).
We can distinguish the following cases:
▶ If λi ̸= 0 for all 1 ≤ i ≤ n:
▶ λi > 0 for any 1 ≤ i ≤ n ⇒ ∇2 f (x∗ ) ≻ 0 ⇒ x∗ local minimum

▶ λi < 0 for any 1 ≤ i ≤ n ⇒ ∇2 f (x∗ ) ≺ 0 ⇒ x∗ local maximum

▶ λi > 0 and λj < 0 for some 1 ≤ i ̸= j ≤ n ⇒ x∗ saddle point

▶ λi = 0 for some 1 ≤ i ≤ n ⇒ we cannot conclude using ∇2 f (x∗ )
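
As an illustration (not part of the original slides), here is a minimal NumPy sketch of this eigenvalue test, assuming the Hessian has already been evaluated at a stationary point; the function name and tolerance are illustrative choices:

```python
import numpy as np

def classify_stationary_point(hessian, tol=1e-10):
    """Classify a stationary point from the eigenvalues of its (symmetric) Hessian."""
    eigvals = np.linalg.eigvalsh(hessian)        # eigenvalues of a symmetric matrix
    if np.any(np.abs(eigvals) <= tol):
        return "cannot conclude from the Hessian"
    if np.all(eigvals > 0):
        return "local minimum"
    if np.all(eigvals < 0):
        return "local maximum"
    return "saddle point"

# Hessians at (0, 0) of f1, f2 and f3 from the previous slides
print(classify_stationary_point(np.diag([6.0, 5.0])))     # local minimum
print(classify_stationary_point(np.diag([-6.0, -5.0])))   # local maximum
print(classify_stationary_point(np.diag([6.0, -5.0])))    # saddle point
```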

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 20/43
What happens if λi = 0 for some 1 ≤ i ≤ n

Figure: Graphs of f5 (x, y) = 3x⁴ + 2.5y⁴, f6 (x, y) = −3x⁴ − 2.5y⁴ and f7 (x, y) = 3x⁴ − 2.5y⁴

∇f5 (0, 0) = ∇f6 (0, 0) = ∇f7 (0, 0) = (0, 0)
∇2 f5 (0, 0) = ∇2 f6 (0, 0) = ∇2 f7 (0, 0) = diag(0, 0) ⪰ 0

All three functions have the same gradient and Hessian at (0, 0), yet (0, 0) is a global minimum of f5 , a global maximum of f6 , and neither a minimum nor a maximum of f7 .

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 21/43
How to compute analytically and classify stationary points ?

min_{x∈Rn} f (x)    (Punc )

1. Find all x∗ ∈ Rn such that ∇f (x∗ ) = 0 (stationary points)

2. Compute ∇2 f (x∗ ) and its eigenvalues for all stationary points x∗


3. Use slide 20 to decide if
▶ x∗ is a local minimum
▶ x∗ is a local maximum
▶ x∗ is a saddle point
▶ we cannot conclude
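
As a complement to this recipe, here is a small SymPy sketch (illustrative only; the test function is f3 from the earlier slides) that carries out the three steps symbolically:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f3 = 3 * x**2 - sp.Rational(5, 2) * y**2          # example objective from the slides

grad = [sp.diff(f3, v) for v in (x, y)]           # step 1: gradient
stationary_points = sp.solve(grad, (x, y), dict=True)

H = sp.hessian(f3, (x, y))                        # step 2: Hessian
for pt in stationary_points:
    eigs = list(H.subs(pt).eigenvals().keys())    # step 3: eigenvalues
    print(pt, eigs)                               # eigenvalues 6 and -5 -> saddle point
```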

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 22/43
Sufficient optimality conditions in the convex case

min_{x∈Rn} f (x)    (Punc )

Minima of a convex function


Let x∗ ∈ Rn . If f is convex (for f ∈ C 2 (Rn , R), equivalently ∇2 f (x) ⪰ 0 for all x ∈ Rn ) then:
x∗ local solution of (Punc ) if and only if x∗ global solution of (Punc ).

First-order convex sufficient optimality condition


Let x∗ ∈ Rn . If f ∈ C 1 (Rn , R) is convex and x∗ satisfies:
∇f (x∗ ) = 0,
then x∗ is a local (and thus global) solution of (Punc ).
Remark
For f convex: x∗ stationary point of f ⇒ x∗ global minimum of f

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 23/43
Unconstrained convex quadratic optimization

Let S ∈ Mn (R) be symmetric and c ∈ Rn . The following quadratic problem can be defined:

min_{x∈Rn} f (x) = (1/2) xT Sx − cT x.    (Pquad )
Properties
▶ ∀x ∈ Rn , ∇f (x) = Sx − c and ∇2 f (x) = S.
▶ f is convex ⇐⇒ S ⪰ 0.
▶ Let x∗ ∈ Rn . If S ⪰ 0 then:
x∗ ∈ Rn is a global solution of (Pquad ) ⇐⇒ Sx∗ = c.
▶ If S ≻ 0 then (Pquad ) has a unique global solution x∗ = S −1 c.
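
A quick NumPy check of the last two properties on a small example (the particular S ≻ 0 and c below are arbitrary illustrations):

```python
import numpy as np

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # symmetric positive definite
c = np.array([1.0, 2.0])

f = lambda x: 0.5 * x @ S @ x - c @ x
x_star = np.linalg.solve(S, c)             # unique global solution of S x* = c

print(np.allclose(S @ x_star - c, 0.0))    # gradient vanishes at x*: True
print(f(x_star) <= f(x_star + np.array([0.1, -0.2])))  # nearby points have larger cost: True
```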

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 24/43
Linear least squares

Let R ∈ Mn×p (R) and y ∈ Rn . The linear least squares problem can be
defined as follows:
min_{x∈Rp} f (x) = (1/2) ∥Rx − y∥².    (Pls )
Properties
▶ (Pls ) is a special case of (Pquad ) with S = R T R and c = R T y.
▶ ∀x ∈ Rp , ∇f (x) = R T Rx − R T y and ∇2 f (x) = R T R ⪰ 0.
▶ x∗ ∈ Rp is a global solution of (Pls ) ⇐⇒ R T Rx∗ = R T y.
▶ If R has full column rank then (Pls ) has a unique global solution
x∗ = (R T R)−1 R T y.
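
A minimal NumPy sketch (with random data, purely illustrative) comparing the normal-equation solution above with the library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((50, 3))                  # tall matrix, full column rank (almost surely)
y = rng.standard_normal(50)

x_normal = np.linalg.solve(R.T @ R, R.T @ y)      # solve R^T R x = R^T y
x_lstsq, *_ = np.linalg.lstsq(R, y, rcond=None)   # NumPy's least-squares routine

print(np.allclose(x_normal, x_lstsq))             # True (up to rounding errors)
```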

Introduction to Optimization | Optimality conditions for unconstrained optimization | March 11th | Slide 25/43
Numerical Optimization

Introduction to Optimization | Numerical Optimization | March 11th | Slide 26/43


Concepts of Numerical Optimization

We aim to develop an algorithm in order to find a solution of (Punc ) (global or local).

min_{x∈Rn} f (x)    (Punc )
Two ”naive” methods that only work in certain cases:

▶ ”Enumerate” all possibilities by discretization


⇒ fails very quickly as n increases
▶ Find all stationary points of f , i.e., points x such that ∇f (x) = 0,
and choose the one with the smallest cost
⇒ potentially an infinite number of stationary points, or no closed-form expression for them

Introduction to Optimization | Numerical Optimization | March 11th | Slide 27/43


Iterative Optimization Algorithms
General scheme
▶ We initialize the algorithm with x0 ∈ Rn
▶ For k ≥ 0,
▶ Start from an iterate xk
▶ Choose a ”good” direction dk
▶ Move ”sufficiently far” in the direction dk to a new point xk+1
▶ We interrupt the algorithm when a certain stopping criterion is
satisfied

Introduction to Optimization | Numerical Optimization | March 11th | Slide 28/43


What do we expect from these methods?

min_{x∈Rn} f (x)    (Punc )

In order to solve (Punc ), we hope to achieve one of the following:


▶ The iterates should get close to a solution;
▶ The function values should get close to the optimum;
▶ The optimality conditions should get close to being satisfied.

Introduction to Optimization | Numerical Optimization | March 11th | Slide 29/43


Associated convergence notions

Convergence of Iterates to a Solution

xk → x∗ as k → +∞
where x∗ ∈ argmin_{x∈Rn} f (x)
Convergence of Costs of Iterates to the Optimal Value

f (xk ) → f∗ as k → +∞
where f∗ = min_{x∈Rn} f (x)
Convergence to a Stationary Point

∇f (xk ) → 0 as k → +∞
if f is differentiable

Introduction to Optimization | Numerical Optimization | March 11th | Slide 30/43


Why these conditions?

In practice:
▶ We do not know the optimal solution(s) x∗ ;
▶ We do not know the optimal value f ∗ = f (x∗ ).

From an algorithmic standpoint:


▶ We can measure the behavior of the iterates;
▶ We can evaluate the objective function and try to decrease it
iteratively;
▶ We can evaluate/estimate the gradient and decrease its norm to
zero.

Introduction to Optimization | Numerical Optimization | March 11th | Slide 31/43


Descent direction methods

Introduction to Optimization | Descent direction methods | March 11th | Slide 32/43


Descent Directions
Definition
Let x ∈ Rn , d ∈ Rn \{0}, and f ∈ C 1 (Rn , R).
We say that d is a descent direction at x if:
⟨∇f (x), d⟩ < 0,
where ⟨·, ·⟩ is the canonical scalar product on Rn .

Property
If d ∈ Rn \{0} is a descent direction of f at x, then there exists ᾱ > 0
such that ∀α ∈ (0, ᾱ]:
f (x + αd) < f (x). (3)

Remark
From a Taylor expansion, we have
f (x + αd) = f (x) + α(∇f (x)T d + ∥d∥ϵ(αd))
where d ∈ Rn , α > 0, and ϵ(αd) → 0 as α → 0; hence, for α small enough, the sign of f (x + αd) − f (x) is that of ∇f (x)T d.
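
A quick numerical illustration (the function and step sizes are arbitrary choices) that a direction with ⟨∇f (x), d⟩ < 0 decreases f for small enough steps:

```python
import numpy as np

f = lambda x: 3 * x[0]**2 + 2.5 * x[1]**2       # f1 from the earlier slides
grad = lambda x: np.array([6 * x[0], 5 * x[1]])

x = np.array([1.0, -2.0])
d = -grad(x)                                     # candidate direction (steepest descent)
print(grad(x) @ d)                               # -136.0 < 0: d is a descent direction

for alpha in (1e-1, 1e-2, 1e-3):
    print(alpha, f(x + alpha * d) < f(x))        # True for each of these small steps
```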
Introduction to Optimization | Descent direction methods | March 11th | Slide 33/43
Descent Directions

Figure: All possible descent directions at xk (gray area)

Examples:
▶ d = −∇f (x) (steepest descent)

▶ d = −A∇f (x) with A ≻ 0


Introduction to Optimization | Descent direction methods | March 11th | Slide 34/43
Typical Scheme of a Descent Direction Method

▶ We choose an initial iterate x0 ∈ Rn , an initial step α0 > 0, a threshold ϵ > 0, and a maximum number of iterations kmax
▶ For k ≥ 0, assuming that the iterate xk is available, then
1. We perform a stopping test
(e.g., k = kmax or ∥∇f (xk )∥ ≤ ϵ ⇒ STOP)
2. We choose a descent direction dk
3. We determine a stepsize αk > 0 to decrease f along dk (e.g.,
f (xk + αk dk ) < f (xk ))
4. We define the new iterate xk+1 = xk + αk dk

▶ We move to the next iteration k ← k + 1

Introduction to Optimization | Descent direction methods | March 11th | Slide 35/43


Negative gradient direction

min_{x∈Rn} f (x)    (Punc )

Consider any x ∈ Rn . Then one of the two assertions below holds:


▶ Either ∇f (x) = 0;
▶ Or d = −∇f (x) is a descent direction of f at x
⇒ the function f decreases locally around x in the direction of
−∇f (x).

Introduction to Optimization | Descent direction methods | March 11th | Slide 36/43


Gradient descent method

▶ x0 ∈ Rn , α0 > 0, ϵ > 0, kmax ∈ N


▶ For k ≥ 0, assuming that the iterate xk is available, then
1. We perform a stopping test
(e.g., k = kmax or ∥∇f (xk )∥ ≤ ϵ ⇒ STOP)
2. We choose dk = −∇f(xk )
3. We determine a stepsize αk > 0
4. We define the new iterate xk+1 = xk + αk dk

▶ We move to the next iteration k ← k + 1


Remark
If ∇f (xk ) ̸= 0, then dk is always a descent direction
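
A compact Python sketch of this gradient method with a constant stepsize (the test function, stepsize, tolerance and iteration cap are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.1, eps=1e-6, k_max=1000):
    """Gradient method: x_{k+1} = x_k - alpha * grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(k_max):
        g = grad(x)
        if np.linalg.norm(g) <= eps:     # stopping test on the gradient norm
            return x, k
        x = x - alpha * g                # d_k = -grad f(x_k), constant stepsize
    return x, k_max

# f1(x, y) = 3x^2 + 2.5y^2 from the earlier slides; its unique minimizer is (0, 0)
grad_f1 = lambda x: np.array([6 * x[0], 5 * x[1]])
print(gradient_descent(grad_f1, [2.0, -1.5]))    # converges to (approximately) the origin
```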

Introduction to Optimization | Descent direction methods | March 11th | Slide 37/43


Linesearch: choosing the Step Size of the Algorithm

Constant Step Size


We fix αk ≡ α > 0.
▶ No guarantee that f decreases at each iteration
▶ The choice of α is not straightforward in practice

Optimal Step Size


We choose αk such that:
αk = argmin_{α>0} f (xk + αdk )

▶ Optimal decrease at each iteration


▶ One-dimensional optimization problem to solve at each iteration
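
As a sketch of the optimal-stepsize rule (using SciPy's scalar minimizer; the quadratic test function and the search interval are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: 3 * x[0]**2 + 2.5 * x[1]**2        # f1 from the earlier slides
grad = lambda x: np.array([6 * x[0], 5 * x[1]])

x = np.array([2.0, -1.5])
d = -grad(x)                                      # descent direction

phi = lambda alpha: f(x + alpha * d)              # one-dimensional problem in alpha
alpha_k = minimize_scalar(phi, bounds=(0.0, 1.0), method="bounded").x

print(alpha_k, f(x + alpha_k * d) < f(x))         # the cost strictly decreases
```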

Introduction to Optimization | Descent direction methods | March 11th | Slide 38/43


Gradient Algorithm: Example

Figure: Gradient method in a favorable case

Introduction to Optimization | Descent direction methods | March 11th | Slide 39/43


Gradient Algorithm: Example

Figure: Gradient method with optimal stepsize applied to the Rosenbrock banana function: f (x1 , x2 ) = (1 − x1 )² + 100(x2 − x1 ²)²
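
For comparison (not part of the original slides), SciPy ships the Rosenbrock function and its gradient, so a gradient-based solver can be run on it directly:

```python
from scipy.optimize import minimize, rosen, rosen_der

x0 = [-1.2, 1.0]                                  # a classical starting point
res = minimize(rosen, x0, jac=rosen_der, method="CG", tol=1e-10)
print(res.x, res.nit)                             # approaches (1, 1), the global minimizer
```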

Introduction to Optimization | Descent direction methods | March 11th | Slide 40/43


Newton’s Method

min_{x∈Rn} f (x)    (Punc )

Quadratic approximation for f ∈ C 2


Given an iterate xk ∈ Rn , the next iterate xk+1 is chosen to minimize a
second-order Taylor approximation of f around xk :
f (x) ≈ q(x) = f (xk ) + ∇f (xk )T (x − xk ) + (1/2)(x − xk )T ∇2 f (xk )(x − xk )
(quadratic approximation of f around xk )

▶ xk+1 = argmin_{x∈Rn} q(x) if ∇2 f (xk ) ≻ 0.


▶ However, xk+1 may not be defined when f is not convex.

Introduction to Optimization | Descent direction methods | March 11th | Slide 41/43


Newton’s Method

If we set xk+1 = xk + dk , then:


 
dk = argmin_{d∈Rn} { f (xk ) + ∇f (xk )T d + (1/2) dT ∇2 f (xk )d }

▶ However, dk is defined only when ∇2 f (xk ) ≻ 0.


▶ In this case: dk = −(∇2 f (xk ))−1 ∇f (xk ).

Introduction to Optimization | Descent direction methods | March 11th | Slide 42/43


Newton’s Method: Algorithm

▶ x0 ∈ Rn , α0 > 0, ϵ > 0, kmax ∈ N


▶ For k ≥ 0, assuming that the iterate xk is available, then
1. We perform a stopping test
(e.g., k = kmax or ∥∇f (xk )∥ ≤ ϵ ⇒ STOP)
2. We choose dk = −∇2 f(xk )−1 ∇f(xk )
3. We determine a step αk > 0 (e.g., αk ≡ 1)
4. We define the new iterate xk+1 = xk + αk dk

▶ We move to the next iteration k ← k + 1


Remarks
▶ Here, dk is a descent direction if ∇2 f (xk ) ≻ 0
▶ In general, Newton’s methods converge (locally) faster than gradient
methods but are more expensive (gradient vs. gradient + Hessian)
▶ MATLAB Example: Link
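
A minimal Python sketch of this Newton iteration with αk ≡ 1 (the stopping test, test function and helper names are illustrative; the step assumes ∇2 f (xk ) is invertible):

```python
import numpy as np

def newton(grad, hess, x0, alpha=1.0, eps=1e-8, k_max=50):
    """Newton's method: x_{k+1} = x_k - alpha * hess(x_k)^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(k_max):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # stopping test
            return x, k
        d = -np.linalg.solve(hess(x), g)      # Newton direction (Hessian assumed invertible)
        x = x + alpha * d
    return x, k_max

# On the quadratic f1(x, y) = 3x^2 + 2.5y^2, Newton's method reaches the minimizer in one step
grad_f1 = lambda x: np.array([6 * x[0], 5 * x[1]])
hess_f1 = lambda x: np.diag([6.0, 5.0])
print(newton(grad_f1, hess_f1, [2.0, -1.5]))  # (array([0., 0.]), 1)
```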

Introduction to Optimization | Descent direction methods | March 11th | Slide 43/43
