
Optimization of Submodular Functions

Tutorial - Lecture II

Jan Vondrák
IBM Almaden Research Center, San Jose, CA



Outline

Lecture I:
1 Submodular functions: what and why?
2 Convex aspects: Submodular minimization
3 Concave aspects: Submodular maximization

Lecture II:
1 Hardness of constrained submodular minimization
2 Unconstrained submodular maximization
3 Hardness more generally: the symmetry gap





Hardness of constrained submodular minimization
We saw:
Submodular minimization is in P
(without constraints, and also under "parity type" constraints).

However: minimization is brittle and can become very hard to
approximate under simple constraints.

√(n / log n)-hardness for min{f(S) : |S| ≥ k}, Submodular Load
Balancing, Submodular Sparsest Cut [Svitkina, Fleischer '09]
n^Ω(1)-hardness for Submodular Spanning Tree, Submodular
Perfect Matching, Submodular Shortest Path
[Goel, Karande, Tripathi, Wang '09]

These hardness results assume the value oracle model: the only
access to f is through value queries "f(S) = ?".
Superconstant hardness for submodular minimization

Problem: min{f(S) : |S| ≥ k}.

Construction of [Goemans,Harvey,Iwata,Mirrokni ’09]:


A = random (hidden) set of size k = √n
f(S) = min{ √n, |S \ A| + min{log n, |S ∩ A|} }

Analysis: with high probability, a value query does not give any
information about A ⇒ an algorithm will return a set of value √n, while
the optimum is f(A) = log n, a gap of √n / log n.

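A small numeric sketch of this construction (not from the slides; k = √n and the natural logarithm are assumptions for illustration):

import math, random

n = 10**6
rootn, logn = math.isqrt(n), math.log(n)
A = set(random.sample(range(n), rootn))   # hidden random set, |A| = √n

# The hard instance: f(S) = min{√n, |S \ A| + min{log n, |S ∩ A|}}.
f = lambda S: min(rootn, len(S - A) + min(logn, len(S & A)))

S = set(random.sample(range(n), rootn))   # a "typical" value query
print(f(S), min(rootn, len(S)))           # agree w.h.p.: the query reveals nothing about A
print(f(A))                               # ≈ log n, the hidden optimum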


Overview of submodular minimization

CONSTRAINED SUBMODULAR MINIMIZATION

Constraint            Approximation   Hardness      Hardness ref.

Vertex cover          2               2 [UGC]       Khot, Regev '03
k-unif. hitting set   k               k [UGC]       Khot, Regev '03
k-way partition       2 − 2/k         2 − 2/k       Ene, V., Wu '12
Facility location     log n           log n         Svitkina, Tardos '07
Set cover             n               n / log² n    Iwata, Nagano '09
|S| ≥ k               Õ(√n)           Ω̃(√n)         Svitkina, Fleischer '09
Sparsest Cut          Õ(√n)           Ω̃(√n)         Svitkina, Fleischer '09
Load Balancing        Õ(√n)           Ω̃(√n)         Svitkina, Fleischer '09
Shortest path         O(n^(2/3))      Ω(n^(2/3))    GKTW '09
Spanning tree         O(n)            Ω(n)          GKTW '09







Maximization of a nonnegative submodular function
We saw:
Maximizing a submodular function is NP-hard (Max Cut).

Unconstrained submodular maximization: Given a submodular
function f : 2^N → R+, how well can we approximate the maximum?

Special case - Max Cut:
polynomial-time 0.878-approximation [Goemans, Williamson '95],
best possible assuming the Unique Games Conjecture [Khot, Kindler,
Mossel, O'Donnell '04; Mossel, O'Donnell, Oleszkiewicz '05]
Optimal approximation for submodular maximization

Unconstrained submodular maximization: max{f(S) : S ⊆ N}
has been resolved recently:
there is a (randomized) 1/2-approximation
[Buchbinder, Feldman, Naor, Schwartz '12]
a (1/2 + ε)-approximation in the value oracle model would require
exponentially many queries [Feige, Mirrokni, V. '07]
a (1/2 + ε)-approximation for certain explicitly represented
submodular functions would imply NP = RP [Dobzinski, V. '12]



1/2-approximation for submodular maximization
[Buchbinder, Feldman, Naor, Schwartz '12]
A double-greedy algorithm with two evolving solutions:

Initialize A = ∅, B = N (everything).
In each step, grow A or shrink B.
Invariant: A ⊆ B.

While A ≠ B {
  Pick i ∈ B \ A;
  Let α = max{f(A + i) − f(A), 0}, β = max{f(B − i) − f(B), 0};
  With probability α/(α + β), include i in A;
  With probability β/(α + β), remove i from B;
}

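For concreteness, a minimal Python sketch of the double-greedy algorithm above (not from the slides; the tie-breaking when α = β = 0 and the example cut function are illustrative choices):

import random

def double_greedy(f, ground_set):
    """Randomized double greedy for unconstrained maximization of a
    nonnegative submodular f; the output has expected value >= OPT/2."""
    A, B = set(), set(ground_set)
    for i in ground_set:                     # each step picks some i in B \ A
        a = max(f(A | {i}) - f(A), 0.0)      # alpha: gain of adding i to A
        b = max(f(B - {i}) - f(B), 0.0)      # beta: gain of removing i from B
        if a + b == 0 or random.random() < a / (a + b):
            A.add(i)                         # with probability alpha/(alpha+beta)
        else:
            B.discard(i)                     # with probability beta/(alpha+beta)
    return A                                 # at termination A == B

# Example: cut function of a 4-cycle (nonnegative and submodular).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
print(double_greedy(cut, range(4)))          # often {0, 2} or {1, 3}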




Analysis of the 1/2-approximation
Evolving optimum: O = A ∪ (B ∩ S*), where S* is the optimum.
We track the quantity f(A) + f(B) + 2f(O):

(Figure: nested sets A ⊆ O ⊆ B, with the optimum S*.)

Initially: A = ∅, B = N, O = S*.
  f(A) + f(B) + 2f(O) ≥ 2 · OPT.
At the end: A = B = O = output.
  f(A) + f(B) + 2f(O) = 4 · ALG.

Claim: E[f(A) + f(B) + 2f(O)] never decreases in the process.

Proof: The expected change in f(A) + f(B) + 2f(O) is at least

  α/(α + β) · α + β/(α + β) · β − 2αβ/(α + β) = (α − β)²/(α + β) ≥ 0.

(The first two terms lower-bound the expected gain in f(A) + f(B);
the 2αβ/(α + β) term upper-bounds the expected loss in 2f(O).)

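A quick empirical check of the resulting guarantee E[f(ALG)] ≥ OPT/2, reusing the double_greedy sketch above on a small random directed-cut instance (the instance and parameters are illustrative):

import itertools, random

random.seed(0)
V = range(8)
edges = [(u, v) for u in V for v in V if u != v and random.random() < 0.3]
cut = lambda S: sum(1 for u, v in edges if u in S and v not in S)  # directed cut

OPT = max(cut(set(S)) for r in range(len(V) + 1)
          for S in itertools.combinations(V, r))         # brute-force optimum
avg = sum(cut(double_greedy(cut, V)) for _ in range(2000)) / 2000
print(OPT, avg, avg >= OPT / 2)                          # empirically True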




Optimality of 1/2 for submodular maximization

How do we prove that 1/2 is optimal? [Feige, Mirrokni, V. ’07]

Again, the value oracle model: the only access to f is through value
queries "f(S) = ?", polynomially many times.

Idea: Construct an instance with optimum f(S*) = 1 − ε, so that all the
sets an algorithm will ever see have value f(S) ≤ 1/2.

(Figure: a set S straddling the two halves A, B of the ground set.)

f(S) = ψ(|S ∩ A| / |A|, |S ∩ B| / |B|)

A, B are the intended optimal solutions,
but the partition (A, B) is hard to find.





Constructing the hard instance

Continuous submodularity:
If ∂²ψ/∂x∂y ≤ 0, then f(S) = ψ(|S ∩ A| / |A|, |S ∩ B| / |B|) is submodular.
(non-increasing partial derivatives ≈ non-increasing marginal values)

The function will be "roughly": ψ(x, y) = x(1 − y) + (1 − x)y.

(Figure: f(A) = 1 and f(B) = 1, but a set S split evenly between
A and B has f(S) = 1/2.)

However, it should be hard to find the partition (A, B)!

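A brute-force sanity check of this construction on a small ground set (illustrative; the partition and set sizes are arbitrary choices). The cross-partial of ψ(x, y) = x(1 − y) + (1 − x)y is −2 ≤ 0, and the induced set function is indeed submodular:

from itertools import combinations

psi = lambda x, y: x * (1 - y) + (1 - x) * y

n = 8
A = set(range(n // 2))                  # the hidden partition (A, B)
B = set(range(n // 2, n))
f = lambda S: psi(len(S & A) / len(A), len(S & B) / len(B))

# Brute-force submodularity: f(S) + f(T) >= f(S | T) + f(S & T).
subsets = [set(c) for r in range(n + 1) for c in combinations(range(n), r)]
assert all(f(S) + f(T) >= f(S | T) + f(S & T) - 1e-9
           for S in subsets for T in subsets)
print(f(A), f(B), f({0, 1, 4, 5}))      # 1.0 1.0 0.5 (a balanced set gets 1/2)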




The perturbation trick
We modify ψ(x, y) as follows:

(Figure: ψ and the flattened ψ̃, restricted to the line x + y = 1 and
plotted against x − y; ψ̃ is constant for |x − y| < δ, with
ψ̃(1/2, 1/2) = 1/2 while ψ̃(0, 1) stays near 1.)

The function for |x − y| < δ is flattened so it depends only on x + y.
If the partition (A, B) is random, x = |S ∩ A| / |A| and y = |S ∩ B| / |B|
are random variables, with high probability satisfying |x − y| < δ.
Hence, an algorithm will never learn any information about (A, B).


Hardness and symmetry

Conclusion: for unconstrained submodular maximization,


The optimum is f(A) = f(B) = 1 − ε.
An algorithm can only find solutions symmetrically split between
A, B: |S ∩ A| ≈ |S ∩ B|.
The value of such solutions is at most 1/2.

More general view:


The difficulty here is in distinguishing between symmetric and
asymmetric solutions.
Submodularity is flexible enough that we can hide the asymmetric
solutions and force an algorithm to find only symmetric ones.







Symmetric instances
Symmetric instance: max{f(S) : S ∈ F} on a ground set X is
symmetric under a group of permutations G ⊆ S(X) if, for any σ ∈ G,
  f(S) = f(σ(S))
  S ∈ F ⇔ S′ ∈ F whenever 1̄_S = 1̄_{S′}, where
  x̄ = E_{σ∈G}[σ(x)] (the symmetrization operation)

Example: Max Cut on K2

(Figure: the single edge {x1, x2}.)

X = {1, 2}, F = 2^X, P(F) = [0, 1]².
f(S) = 1 if |S| = 1, otherwise 0.
Symmetric under G = S2, all permutations of 2 elements.
For x = (x1, x2), x̄ = ((x1 + x2)/2, (x1 + x2)/2).
Symmetry gap
Symmetry gap:

  γ = OPT̄ / OPT

where
  OPT = max{F(x) : x ∈ P(F)}
  OPT̄ = max{F(x̄) : x ∈ P(F)}  (the symmetrized optimum)
and F(x) is the multilinear extension of f.

Example (Max Cut on K2):

OPT = max{F(x) : x ∈ P(F)} = F(1, 0) = 1.
OPT̄ = max{F(x̄) : x ∈ P(F)} = F(1/2, 1/2) = 1/2.
⇒ γ = 1/2.
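A brute-force sketch of these quantities for the K2 example (exponential-time, for illustration only):

from itertools import product

def F(f, x):
    # Multilinear extension: F(x) = E[f(R)], where R contains each
    # element i independently with probability x[i].
    total = 0.0
    for bits in product([0, 1], repeat=len(x)):
        p = 1.0
        for i, b in enumerate(bits):
            p *= x[i] if b else 1 - x[i]
        total += p * f({i for i, b in enumerate(bits) if b})
    return total

cut_K2 = lambda S: 1.0 if len(S) == 1 else 0.0   # Max Cut on K2
print(F(cut_K2, (1, 0)))       # OPT = 1.0
print(F(cut_K2, (0.5, 0.5)))   # OPT̄ = 0.5  =>  symmetry gap γ = 1/2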


Symmetry gap ⇒ hardness
Oracle hardness [V. '09]:
For any instance I of submodular maximization with symmetry gap γ,
and any ε > 0, a (γ + ε)-approximation for a class of instances produced
by "blowing up" I would require exponentially many value queries.

Computational hardness [Dobzinski, V. ’12]:


There is no (γ + ε)-approximation for a certain explicit representation
of these instances, unless NP = RP.

Notes:
"Blow-up" means expanding the ground set, replacing the
objective function by the perturbed one, and extending the
feasibility constraint in a natural way.
Example: max{f(S) : |S| ≤ 1} on a ground set [k]
−→ max{f(S) : |S| ≤ n/k} on a ground set [n].





Application 1: nonnegative submodular maximization

max{f(S) : S ⊆ {1, 2}}: symmetric under S2.

Symmetry gap is γ = 1/2.
Refined instances are instances of unconstrained (non-monotone)
submodular maximization.
The theorem implies that a better than 1/2-approximation is
impossible (previously known [FMV '07]).





Application 2: submodular welfare maximization

(Figure: k items to be allocated among k players.)

k items, k players; each player has the valuation function
f(S) = min{|S|, 1}, symmetric under Sk.
The optimum allocates 1 item to each player: OPT = k.
OPT̄ = k · F(1/k, 1/k, ..., 1/k) = k(1 − (1 − 1/k)^k).
⇒ hardness of (1 − (1 − 1/k)^k + ε)-approximation for k players
[Mirrokni, Schapira, V. '08]
A (1 − (1 − 1/k)^k)-approximation can be achieved
[Feldman, Naor, Schwartz '11]

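A quick numeric look at this threshold (illustrative): it decreases from 3/4 at k = 2 toward 1 − 1/e ≈ 0.632.

# The k-player welfare threshold 1 - (1 - 1/k)^k.
for k in [2, 3, 5, 10, 100]:
    print(k, round(1 - (1 - 1/k) ** k, 4))
# 2 0.75   3 0.7037   5 0.6723   10 0.6513   100 0.634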




Application 3: non-monotone submodular over bases

(Figure: two rows A = {x1, ..., xk} and B = {x1′, ..., xk′}, with an arc
xi → xi′ for each i.)

X = A ∪ B, |A| = |B| = k,
F = {S ⊆ X : |S ∩ A| = 1, |S ∩ B| = k − 1}.
f(S) = number of arcs leaving S; symmetric under Sk.
OPT = F(1, 0, ..., 0; 0, 1, ..., 1) = 1.
OPT̄ = F(1/k, ..., 1/k; 1 − 1/k, ..., 1 − 1/k) = 1/k.
Refined instances: non-monotone submodular maximization over
matroid bases, with base packing number ν = k/(k − 1).
The theorem implies that a better than 1/k-approximation is impossible.
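A brute-force check of the two values for small k (illustrative; the arc structure xi → xi′ is read off the figure and is an assumption):

import math
from itertools import product

def F(f, x):   # brute-force multilinear extension, as in the Max Cut sketch
    return sum(f({i for i, b in enumerate(bits) if b})
               * math.prod(x[i] if b else 1 - x[i] for i in range(len(x)))
               for bits in product([0, 1], repeat=len(x)))

k = 3                                  # elements 0..k-1 are A, k..2k-1 are B
arcs = [(i, k + i) for i in range(k)]  # assumed arcs xi -> xi′
f = lambda S: sum(1 for u, v in arcs if u in S and v not in S)

feasible = [{a} | (set(range(k, 2 * k)) - {b})   # |S ∩ A| = 1, |S ∩ B| = k − 1
            for a in range(k) for b in range(k, 2 * k)]
print(max(f(S) for S in feasible))     # OPT = 1
x = [1 / k] * k + [1 - 1 / k] * k      # the symmetrized point
print(F(f, x))                         # OPT̄ = 1/k ≈ 0.3333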


Symmetry gap ↔ Integrality gap

In fact: [Ene,V.,Wu ’12]


Symmetry gap is equal to the integrality gap of a related LP.
In some cases, LP gap gives a matching UG-hardness result.

Example: both gaps are 2 − 2/k for Node-weighted k-way Cut.

⇒ No (2 − 2/k + ε)-approximation for Node-weighted k-way Cut
(assuming UGC).
⇒ No (2 − 2/k + ε)-approximation for Submodular k-way Partition
(in the value oracle model).
A (2 − 2/k)-approximation can be achieved for both.



Hardness results from symmetry gap (in red)
MONOTONE MAXIMIZATION

Constraint          Approximation      Hardness           Hardness ref.
|S| ≤ k, matroid    1 − 1/e            1 − 1/e            Nemhauser, Wolsey '78
k-player welfare    1 − (1 − 1/k)^k    1 − (1 − 1/k)^k    Mirrokni, Schapira, V. '08
k matroids          k + ε              Ω(k / log k)       Hazan, Safra, Schwartz '03

NON-MONOTONE MAXIMIZATION

Constraint          Approximation      Hardness           Hardness ref.
unconstrained       1/2                1/2                Feige, Mirrokni, V. '07
|S| ≤ k             1/e                0.49               Oveis-Gharan, V. '11
matroid             1/e                0.48               Oveis-Gharan, V. '11
matroid base        (1/2)(1 − 1/ν)     1 − 1/ν            V. '09
k matroids          k + O(1)           Ω(k / log k)       Hazan, Safra, Schwartz '03



Where to go next?

Many questions remain unanswered: optimal approximations, online
algorithms, stochastic models, incentive-compatible mechanisms,
more powerful oracle models, ...

Two meta-questions:
Is there a maximization problem which is significantly more difficult
for monotone submodular functions than for linear functions?
Can the symmetry gap ratio always be achieved, for problems
where the multilinear relaxation can be rounded without loss?
