
Robust Optimization:
Static Case

V. Leclère (ENPC)

November 24, 2023



Contents

1 Introduction and motivations
    How to add uncertainty in an optimization problem
    Why should you do Robust Optimization?
2 Solving the robust optimization problem
3 Robust optimization for Linear Programs
    Reformulating the problem
    Ellipsoidal uncertainty set
    Polyhedral uncertainty set
    Cardinality constrained LP
4 Probability guarantee
5 Robust Combinatorial Problem
6 Conclusion
An optimization problem

A generic optimization problem can be written

    min_x   L(x)
    s.t.    g(x) ≤ 0

where
    x is the decision variable,
    L is the objective function,
    g is the constraint function.


An optimization problem with uncertainty

Adding uncertainty ξ̃ to the mix:

    min_x   L(x, ξ̃)
    s.t.    g(x, ξ̃) ≤ 0

Remarks:
    ξ̃ is unknown. There are two main ways of modelling it:
        ξ̃ ∈ R with a known uncertainty set R, and a pessimistic approach. This is the robust optimization (RO) approach.
        ξ̃ is a random variable with a known probability law. This is the stochastic programming (SP) approach.
    The cost is not well defined.
        RO: max_{ξ∈R} L(x, ξ).
        SP: E[L(x, ξ)].
    The constraints are not well defined.
        RO: g(x, ξ) ≤ 0, ∀ξ ∈ R.
        SP: g(x, ξ) ≤ 0, P-a.s.
Requirements and limits

Stochastic optimization:
    requires the probability law of the uncertainty ξ
    can be hard to solve (it generally requires discretizing the support, which blows up the dimension of the problem)
    has specific methods (like Benders decomposition)
Robust optimization:
    requires an uncertainty set R
    can be overly conservative, even for reasonable R
    has a complexity that strongly depends on the choice of R
Distributionally robust optimization:
    is a mix between robust and stochastic optimization
    consists in solving a stochastic optimization problem where the law is chosen in a robust way
    is a fast-growing field with many recent results
    but is still harder to implement than the other approaches
Some numerical tests on real-life LPs

From Ben-Tal and Nemirovski:
    take LPs from the Netlib library
    look at the non-integer coefficients, assuming that they are not known with perfect certainty
    What happens if you perturb them by 0.1%?
        constraints can be violated by up to 450%
        P(violation > 0) = 0.5
        P(violation > 150%) = 0.18
        E[violation] = 125%


What do you want from robust optimization?

    finding a solution that is less sensitive to modified data, without a large increase in cost
    choosing an uncertainty set R that:
        offers robustness guarantees
        yields an easily solved optimization problem


Solving a robust optimization problem

The robust optimization problem we want to solve is¹

    min_x   L(x)
    s.t.    g(x, ξ) ≤ 0   ∀ξ ∈ R

Two main approaches are possible:

    Constraint generation: replace R by a finite set of ξ, that is, replace an "infinite number of constraints" by a finite number of them.
    Reformulation: replace g(x, ξ) ≤ 0 ∀ξ ∈ R by sup_{ξ∈R} g(x, ξ) ≤ 0, then make the sup explicit.

¹ For simplicity we dropped, w.l.o.g., the uncertainty in the objective.
Constraint generation algorithm

Data: problem parameters, reference uncertainty ξ_0
Result: approximate value with gap
for k ∈ N do
    solve ṽ = min_x { L(x) | g(x, ξ_κ) ≤ 0, ∀κ ≤ k }, with minimizer x_k
    solve s = max_{ξ∈R} g(x_k, ξ), with maximizer ξ_{k+1}
    if s ≤ 0 then
        the robust optimization problem is solved,
        with value ṽ and optimal solution x_k
Algorithm 1: Constraint Generation Algorithm

Note that we are solving a problem similar to the deterministic problem, with an increasing number of constraints.

This is easy to implement and can be numerically efficient.
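To make the loop concrete, here is a minimal Python sketch of constraint generation on a toy robust LP with interval (box) uncertainty on a single constraint row. The data (c, a_bar, delta, b) and the closed-form separation step are illustrative assumptions, not part of the lecture.

```python
# Constraint generation sketch: min c^T x  s.t.  a^T x <= b  for all a in a box
# a_bar +/- delta. Master problem via scipy.optimize.linprog; the separation
# problem max_a (a^T x - b) has a closed form for a box uncertainty set.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])          # hypothetical cost (maximize x1 + x2)
a_bar = np.array([1.0, 1.0])        # nominal constraint coefficients
delta = np.array([0.2, 0.1])        # coefficient-wise deviations
b = 1.0
bounds = [(0.0, 10.0)] * 2

A_ub, b_ub = [a_bar.copy()], [b]    # start with the nominal constraint only
x = None
for k in range(20):
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    x = res.x
    # Separation: the worst-case row in the box is a_bar + delta * sign(x).
    a_worst = a_bar + delta * np.sign(x)
    s = a_worst @ x - b
    if s <= 1e-8:                   # no violated scenario left: robust optimum found
        break
    A_ub.append(a_worst)            # add the violated constraint and re-solve
    b_ub.append(b)

print("robust solution:", x, "value:", c @ x)
```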
Reformulation principle

We can write the robust optimization problem as

    min_x   L(x)
    s.t.    sup_{ξ∈R} g(x, ξ) ≤ 0

Now, there are two ways of simplifying this problem:

    we can explicitly compute ḡ(x) = sup_{ξ∈R} g(x, ξ);
    by duality we can write sup_{ξ∈R} g(x, ξ) = min_{η∈Q} h(x, η)
        ➥ min_{η∈Q} h(x, η) ≤ 0 is equivalent to ∃η ∈ Q such that h(x, η) ≤ 0, i.e. just add η as a variable in your optimization problem.
Canonization of the problem I

We consider

    min_{x≥0}  max_{(A,b,c)∈R}  c⊤x
    s.t.       Ax ≤ b

Without loss of generality we can consider a deterministic cost:

    min_{x≥0, θ}  θ
    s.t.   Ax ≤ b      ∀(A, b, c) ∈ R
           c⊤x ≤ θ     ∀(A, b, c) ∈ R

That can be written as

    min_{x≥0, θ}  θ
    s.t.   a_i⊤x − b_i ≤ 0   ∀(A, b, c) ∈ R, ∀i ∈ [m]
Canonization of the problem II

We now consider

    min_{x≥0}  c⊤x
    s.t.   a_i⊤x − b_i ≤ 0   ∀(A, b) ∈ R, ∀i ∈ [m]

Let R_i be the projection of R onto coordinate i. We have in particular R ⊂ R_1 × · · · × R_m.
But note that, in the robust constraint, R can be replaced by R_1 × · · · × R_m; indeed,

    f_i(x, ξ_i) ≤ 0, ∀i ∈ [m], ∀ξ ∈ R
    ⇐⇒ f_i(x, ξ_i) ≤ 0, ∀i ∈ [m], ∀ξ ∈ R_1 × · · · × R_m
    ⇐⇒ f_i(x, ξ_i) ≤ 0, ∀ξ_i ∈ R_i, ∀i ∈ [m]






Canonization of the problem III

We now consider a generic row constraint (the row index is dropped for lightness):

    min_{x≥0}  c⊤x
    s.t.   a⊤x − b ≤ 0   ∀(a, b) ∈ R

To model correlation we set

    a = ā + Pζ,    b = b̄ + p⊤ζ

where (ā, b̄) is the nominal value and ζ is the primitive/residual uncertainty.
The robust constraint now reads

    (ā⊤x − b̄) + (P⊤x − p)⊤ζ ≤ 0   ∀ζ ∈ Z


Canonization of the problem IV

Example: assume that a is a random variable with mean ā and covariance Σ. Then a natural reformulation would be

    a = ā + Σ^{1/2} ζ,

so that ζ is centered with uncorrelated coordinates.

Finally, w.l.o.g., we assume that b is deterministic (this can be obtained by adding a variable x_{n+1} constrained to be equal to 1).
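As a small illustration of this reformulation (with made-up numbers), any factor P such that PP⊤ = Σ can play the role of Σ^{1/2}; a Cholesky factor is a convenient choice.

```python
# Illustrative only: write a = a_bar + P @ zeta with P a factor of the covariance.
import numpy as np

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])        # hypothetical covariance of a
a_bar = np.array([1.0, 2.0])            # hypothetical mean of a
P = np.linalg.cholesky(Sigma)           # P @ P.T == Sigma, so cov(P zeta) = Sigma

zeta = np.random.default_rng(0).standard_normal(2)  # centered, uncorrelated residual
a = a_bar + P @ zeta
print(a)
```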


An explicit worst case value

We consider an ellipsoidal uncertainty set

    R = { ξ = ā + Pζ | ∥ζ∥₂ ≤ ρ }

Here, for a given x, we can explicitly compute

    sup_{ξ∈R} ξ⊤x = ā⊤x + sup_{∥ζ∥₂≤ρ} (Pζ)⊤x
                  = ā⊤x + ρ∥P⊤x∥₂

Hence, the constraint

    sup_{ξ∈R} ξ⊤x ≤ b

can be written

    ā⊤x + ρ∥P⊤x∥₂ ≤ b
SOCP problem

    A Second-Order Cone Programming (SOCP) constraint is a constraint of the form

        ∥Ax + b∥₂ ≤ c⊤x + d

    An SOCP problem is a (continuous) optimization problem with a linear cost and linear and SOCP constraints.
    There exist powerful solvers for SOCP (e.g. CPLEX, Gurobi, MOSEK...) with dedicated interior-point methods.
    There exists a duality theory akin to LP duality.
    If a robust optimization problem can be cast as an SOCP, the formulation is deemed efficient.
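As an illustration, the ellipsoidal robust constraint ā⊤x + ρ∥P⊤x∥₂ ≤ b from the previous slide can be handed directly to a conic modelling layer. The following sketch uses cvxpy with made-up data (a_bar, P, rho, b and c are assumptions for the example, not from the lecture).

```python
# Ellipsoidal robust LP constraint written directly as an SOCP in cvxpy.
import numpy as np
import cvxpy as cp

n = 3
rng = np.random.default_rng(0)
a_bar = rng.uniform(0.5, 1.5, size=n)    # nominal constraint coefficients
P = 0.1 * rng.standard_normal((n, n))    # ellipsoid shape (a = a_bar + P zeta)
rho, b = 1.0, 1.0
c = -np.ones(n)                          # maximize the sum of the x_j

x = cp.Variable(n, nonneg=True)
robust_constraint = a_bar @ x + rho * cp.norm(P.T @ x, 2) <= b
prob = cp.Problem(cp.Minimize(c @ x), [robust_constraint])
prob.solve()                             # an installed conic solver handles the SOC
print(x.value, prob.value)
```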


Linear duality: recall

Recall that, if finite,

    max_ξ   ξ⊤x
    s.t.    Dξ ≤ d

has the same value as

    min_η   η⊤d
    s.t.    η⊤D = x⊤
            η ≥ 0

Thus,

    sup_{ξ : Dξ≤d} ξ⊤x ≤ b   ⇐⇒   min_{η≥0 : η⊤D=x⊤} η⊤d ≤ b
                             ⇐⇒   ∃η ≥ 0,  η⊤D = x⊤,  η⊤d ≤ b
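A quick numerical sanity check of this recall, on made-up data (a box-shaped feasible set for ξ): the primal maximization and its dual give the same value.

```python
# LP duality check: max { xi^T x : D xi <= d }  vs  min { eta^T d : eta^T D = x^T, eta >= 0 }.
import numpy as np
from scipy.optimize import linprog

x = np.array([1.0, 2.0])
D = np.vstack([np.eye(2), -np.eye(2)])          # box -1 <= xi <= 1
d = np.ones(4)

# Primal: max xi^T x  <=>  min -x^T xi, s.t. D xi <= d (xi free).
primal = linprog(-x, A_ub=D, b_ub=d, bounds=[(None, None)] * 2)
# Dual: min eta^T d, s.t. D^T eta = x, eta >= 0.
dual = linprog(d, A_eq=D.T, b_eq=x, bounds=[(0, None)] * 4)

print(-primal.fun, dual.fun)   # both equal 3.0 on this data
```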
Polyhedral uncertainty

We consider a polyhedral uncertainty set

    R = { a | Da ≤ d }

Then the robust optimization problem

    min_{x≥0}  c⊤x
    s.t.   sup_{a∈R} a⊤x ≤ b

reads

    min_{x≥0, η≥0}  c⊤x
    s.t.   η⊤d ≤ b
           η⊤D = x⊤
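A minimal cvxpy sketch of this dualized formulation, on made-up data (the box-shaped polyhedron D, d, the cost c and the bound b are assumptions for the example):

```python
# Dualized polyhedral robust constraint: sup { a^T x : D a <= d } <= b becomes
# "exists eta >= 0 with eta^T D = x^T and eta^T d <= b".
import numpy as np
import cvxpy as cp

n = 2
c = np.array([-1.0, -2.0])
b = 1.0
# Box-shaped polyhedron {a : D a <= d} with 0.8 <= a_1 <= 1.2 and 0.9 <= a_2 <= 1.1:
D = np.vstack([np.eye(n), -np.eye(n)])
d = np.concatenate([np.array([1.2, 1.1]), -np.array([0.8, 0.9])])

x = cp.Variable(n, nonneg=True)
eta = cp.Variable(D.shape[0], nonneg=True)
constraints = [D.T @ eta == x,      # eta^T D = x^T
               d @ eta <= b]        # eta^T d <= b
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(x.value, prob.value)
```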


Soyster model

The problem

    min_x  c⊤x
    s.t.   sup_{Ã∈R} Ãx ≤ b
           x̲ ≤ x ≤ x̄

where each coefficient ã_ij ∈ [ā_ij − δ_ij, ā_ij + δ_ij], i.e.

    Σ_j ā_ij x_j + Σ_j δ_ij |x_j| ≤ b_i   ∀i,

can be written, after linearizing |x_j| with y_j, as

    min_{x,y}  c⊤x
    s.t.   Σ_j ā_ij x_j + Σ_j δ_ij y_j ≤ b_i   ∀i
           x̲ ≤ x ≤ x̄
           y_j ≥ x_j,  y_j ≥ −x_j
Cardinality constrained LP I

Soyster's model is overly conservative; we want a model where at most Γ_i coefficients per row have non-zero errors, leading to

    min_{x,y}  c⊤x
    s.t.   Σ_j ā_ij x_j + max_{S_i : |S_i|=Γ_i} Σ_{j∈S_i} δ_ij y_j ≤ b_i   ∀i
           x̲ ≤ x ≤ x̄
           y_j ≥ x_j,  y_j ≥ −x_j

or equivalently, introducing a margin β_i per row,

    min_{x,y,β}  c⊤x
    s.t.   Σ_j ā_ij x_j + β_i ≤ b_i   ∀i
           max_{S_i : |S_i|=Γ_i} Σ_{j∈S_i} δ_ij y_j ≤ β_i   ∀i
           x̲ ≤ x ≤ x̄
           y_j ≥ x_j,  y_j ≥ −x_j


Cardinality constrained LP II

This means that, for row i, we take a margin of

    β_i(x, Γ_i) := max_{S_i : |S_i|=Γ_i} Σ_{j∈S_i} δ_ij |x_j|

which can be obtained as

    β_i(x, Γ_i) = max_{z≥0}  Σ_j δ_ij |x_j| z_ij
                  s.t.  Σ_j z_ij ≤ Γ_i        [λ_i]
                        z_ij ≤ 1              [µ_ij]

(the binary requirement z_ij ∈ {0, 1} can be relaxed to z_ij ≤ 1 without changing the optimal value, since the optimum simply picks the Γ_i largest terms δ_ij |x_j|).

This LP can then be dualized to be integrated in the original LP; the associated dual multipliers are indicated in brackets.


Cardinality constrained LP III

    β_i(x, Γ_i) = max_{z≥0}  Σ_j δ_ij |x_j| z_ij
                  s.t.  Σ_j z_ij ≤ Γ_i        [λ_i]
                        z_ij ≤ 1              [µ_ij]

By Lagrangian (LP) duality,

    β_i(x, Γ_i) = max_{z≥0} min_{λ,µ≥0}  Σ_j δ_ij |x_j| z_ij + λ_i (Γ_i − Σ_j z_ij) + Σ_j µ_ij (1 − z_ij)
                = min_{λ,µ≥0} max_{z≥0}  λ_i Γ_i + Σ_j µ_ij + Σ_j z_ij (δ_ij |x_j| − λ_i − µ_ij)
                = min_{λ,µ≥0}  λ_i Γ_i + Σ_j µ_ij
                  s.t.  δ_ij |x_j| ≤ λ_i + µ_ij   ∀j


Cardinality constrained LP IV

In the end we obtain

    min_{x,β,λ,µ}  c⊤x
    s.t.   Σ_j ā_ij x_j + β_i ≤ b_i        ∀i
           λ_i Γ_i + Σ_j µ_ij ≤ β_i        ∀i
           δ_ij x_j ≤ λ_i + µ_ij           ∀i, j
           −δ_ij x_j ≤ λ_i + µ_ij          ∀i, j
           λ ≥ 0,  µ ≥ 0
           x̲ ≤ x ≤ x̄
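A minimal cvxpy sketch of this final reformulation, on made-up data (A_bar, delta, b, c, Gamma and the box bounds are assumptions for the example), written with explicit loops for clarity:

```python
# Cardinality-constrained (Bertsimas-Sim style) robust LP reformulation in cvxpy.
import numpy as np
import cvxpy as cp

m, n = 2, 3
A_bar = np.array([[1.0, 2.0, 1.0],
                  [2.0, 1.0, 3.0]])
delta = 0.2 * np.abs(A_bar)            # deviations: |a_ij - abar_ij| <= delta_ij
b = np.array([4.0, 5.0])
c = -np.ones(n)                        # maximize x_1 + x_2 + x_3
Gamma = np.array([1.0, 2.0])           # at most Gamma_i deviating coefficients in row i

x = cp.Variable(n)
beta = cp.Variable(m)
lam = cp.Variable(m, nonneg=True)
mu = cp.Variable((m, n), nonneg=True)

constraints = [A_bar @ x + beta <= b,
               cp.multiply(Gamma, lam) + cp.sum(mu, axis=1) <= beta,
               x >= 0, x <= 2]          # box bounds
for i in range(m):
    for j in range(n):
        constraints += [delta[i, j] * x[j] <= lam[i] + mu[i, j],
                        -delta[i, j] * x[j] <= lam[i] + mu[i, j]]

prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(x.value, prob.value)
```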


Robust constraint implying a probabilistic guarantee

Definition
We say that, for a given set P of probability measures, the constraint

    g(x, ξ) ≤ 0,   ∀ξ ∈ R,

implies a probabilistic guarantee of level 1 − ε if, for all P ∈ P,

    P( g(x, ξ) ≤ 0 ) ≥ 1 − ε.


Probability guarantee for ellipsoidal uncertainty

We consider a linear constraint

    Σ_j ã_ij x_j ≤ b_i,   ∀i ∈ [m]

We assume that ã_ij = ā_ij (1 + ε ξ_ij), where the ξ_ij are random variables with mean 0, supported in [−1, 1], and independent across j.
Then the robust constraint

    Σ_j ā_ij x_j + εΩ √(Σ_j ā_ij² x_j²) ≤ b_i,   ∀i ∈ [m]

implies a probabilistic guarantee of level 1 − e^{−Ω²/2}.
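A quick consequence of this bound: to reach a target guarantee level 1 − ε it suffices to take

    Ω ≥ √(2 ln(1/ε)),

e.g. ε = 10⁻² gives Ω ≈ 3.03 and ε = 10⁻⁶ gives Ω ≈ 5.26, so a rather modest inflation of the ellipsoid already yields strong guarantees.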


A combinatorial optimization problem with uncertain cost

We consider a combinatorial optimization problem:

    min_{x∈{0,1}^n}  max_{c̃∈R}  c̃⊤x
    s.t.   x ∈ X

where R is such that each c̃_i ∈ [c̄_i, c̄_i + δ_i], with at most Γ coefficients deviating from c̄_i.
Thus, the problem reads

    (P)   min_{x∈{0,1}^n}  c̄⊤x + max_{|S|≤Γ} Σ_{i∈S} δ_i x_i
          s.t.   x ∈ X

W.l.o.g. we assume that the indices i are ordered by decreasing cost uncertainty span: δ_1 ≥ δ_2 ≥ · · · ≥ δ_n.
Solving the robust combinatorial problem I

We can write (P) as

    min_{x∈{0,1}^n}  max_{ζ∈[0,1]^n}  c̄⊤x + Σ_{i=1}^{n} δ_i x_i ζ_i
    s.t.   x ∈ X
           Σ_{i=1}^{n} ζ_i ≤ Γ

For a given x ∈ X we dualize the inner maximization LP problem.


Solving the robust combinatorial problem II

Thus we can write (P) as

    min_{x,y,θ}  c̄⊤x + Γθ + Σ_{j=1}^{n} y_j
    s.t.   x ∈ X
           y_j + θ ≥ δ_j x_j
           y_j ≥ 0,  θ ≥ 0

Note that an optimal solution satisfies

    y_j = (δ_j x_j − θ)_+ = (δ_j − θ)_+ x_j

as x_j ∈ {0, 1} and θ ≥ 0.
Solving the robust combinatorial problem III

Thus we can write (P) as

    min_{θ≥0}  min_x  c̄⊤x + Γθ + Σ_{j=1}^{n} x_j (δ_j − θ)_+
               s.t.   x ∈ X

We can now decompose the problem over θ ∈ [δ_ℓ, δ_{ℓ−1}], where δ_{n+1} = 0 and δ_0 = +∞.
Therefore, we have

    val(P) = min_{ℓ∈[n+1]} Z^ℓ

where

    Z^ℓ = min_{x∈X, θ∈[δ_ℓ, δ_{ℓ−1}]}  c̄⊤x + Γθ + Σ_{j=1}^{ℓ−1} x_j (δ_j − θ)




Solving the robust combinatorial problem IV

As the problem is linear in θ,

    Z^ℓ = min_{x∈X, θ∈[δ_ℓ, δ_{ℓ−1}]}  c̄⊤x + Γθ + Σ_{j=1}^{ℓ−1} x_j (δ_j − θ)

is attained at θ = δ_ℓ or θ = δ_{ℓ−1}.

So, in the end, we have

    val(P) = min_{ℓ∈[n+1]} G^ℓ

where

    G^ℓ = Γδ_ℓ + min_{x∈X} { c̄⊤x + Σ_{j=1}^{ℓ} (δ_j − δ_ℓ) x_j }

(note that δ_j − δ_ℓ ≥ 0 for j ≤ ℓ).


Algorithm for the robust problem

1. For ℓ ∈ [n+1], solve

       G^ℓ = Γδ_ℓ + min_{x∈X} { c̄⊤x + Σ_{j=1}^{ℓ} (δ_j − δ_ℓ) x_j }

   with optimal solution x_ℓ.

2. Set ℓ* ∈ arg min_{ℓ∈[n+1]} G^ℓ.

3. Return val(P) = G^{ℓ*} and x* = x_{ℓ*}.
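A minimal Python sketch of this algorithm, as it might look on a toy combinatorial problem (choose exactly p items out of n, so the nominal problem is solved by sorting). The data c_bar, delta, Gamma and p are illustrative assumptions; note that summing (δ_j − δ_ℓ)_+ over all j coincides with the sum over j ≤ ℓ since the δ_j are sorted decreasingly.

```python
# Robust combinatorial algorithm: solve n+1 nominal problems with modified costs.
import numpy as np

def nominal_solve(cost, p):
    """Nominal oracle: pick the p cheapest items; returns a 0/1 vector and its value."""
    idx = np.argsort(cost)[:p]
    x = np.zeros(len(cost))
    x[idx] = 1.0
    return x, cost[idx].sum()

c_bar = np.array([4.0, 3.0, 6.0, 2.0, 5.0])
delta = np.array([3.0, 2.5, 2.0, 1.0, 0.5])   # already sorted decreasingly
Gamma, p = 2, 3

best_val, best_x = np.inf, None
deltas = np.append(delta, 0.0)                # delta_{n+1} = 0
for ell in range(len(c_bar) + 1):             # ell = 1, ..., n+1 (0-based here)
    d_ell = deltas[ell]
    cost = c_bar + np.maximum(delta - d_ell, 0.0)   # c_bar_j + (delta_j - delta_ell)_+
    x, val = nominal_solve(cost, p)
    G_ell = Gamma * d_ell + val
    if G_ell < best_val:
        best_val, best_x = G_ell, x

print("robust value:", best_val, "solution:", best_x)
```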


Why do robust optimization?

    Because you want to account for some uncertainty.
    Because you want a solution that resists changes in the data.
    Because your data is imprecise and robustness yields better out-of-sample results.
    Because you do not have the probability law of the uncertainty.
    Because you can control the robustness level.
    Because your problem is "one-shot".


Which uncertainty set to choose?

    An uncertainty set that is computationally tractable.
    An uncertainty set that yields good results.
    An uncertainty set that has some theoretical soundness.
    An uncertainty set that takes the available data into account.
    Select the uncertainty set / level through cross-validation.


Are there some theoretical results?

    Yes: under some assumptions on the randomness (e.g. bounded and symmetric around ā), some uncertainty sets (e.g. ellipsoidal) come with a probabilistic guarantee:

        (∀ξ ∈ R_ε, g(x, ξ) ≤ 0)  ⟹  P( g(x, ξ) ≤ 0 ) ≥ 1 − ε

    Yes: in some cases, approximation schemes for the nominal problem can be extended to the robust problem (e.g. cardinality uncertainty in combinatorial problems).
    Yes: using relevant data, we can use statistical tools to construct a robust set R that implies a probabilistic guarantee.


References

D. Bertsimas, D. Brown, and C. Caramanis. Theory and applications of robust optimization. SIAM Review, 2011.

D. Bertsimas and D. den Hertog. Robust and Adaptive Optimization. Dynamic Ideas, 2022.

B. L. Gorissen, I. Yanikoglu, and D. den Hertog. A practical guide to robust optimization. Omega, 2015.

D. Bertsimas and M. Sim. The price of robustness. Operations Research, 2004.

A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Springer, 2009.
