UNIT 5: Approximation Algorithms

INTRODUCTION TO APPROXIMATION ALGORITHMS

Outline of the lecture

► Background
► Definition of approximation algorithms
► Some examples of approximation algorithms
► Conclusion
Optimization Problem

1. In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions.
2. The objective may be either minimization or maximization, depending on the problem considered.
3. A large number of optimization problems that need to be solved in practice are NP-hard.
4. For such problems, unless P = NP, it is not possible to design algorithms that find an exactly optimal solution for all instances of the problem in time polynomial in the size of the input.
Coping with NP-hardness
► Brute-force algorithms
1. Develop clever enumeration strategies.
2. Guaranteed to find the optimal solution.
3. No guarantees on running time.
► Heuristics
1. Develop intuitive algorithms.
2. Guaranteed to run in polynomial time.
3. No guarantees on the quality of the solution.
► Approximation algorithms
1. Guaranteed to run in polynomial time.
2. Guaranteed to find a solution that is close to the optimal solution (a.k.a. near-optimal).
3. Obstacle: we need to prove that a solution's value is close to the optimum value, without even knowing what the optimum value is!
Approximation Algorithm: Definition

Given an optimization problem P, an algorithm A is said to be an approximation algorithm for P if, for any given instance I, it returns an approximate solution, that is, a feasible solution.
Types of approximation

P : An optimization problem
A : An approximation algorithm
I : An instance of P
A∗(I) : Optimal value for the instance I
A(I) : Value for the instance I generated by A

1. Absolute approximation
A is an absolute approximation algorithm if there exists a constant k such that, for every instance I of P, |A∗(I) − A(I)| ≤ k.
For example, planar graph coloring.
2. Relative approximation
A is a relative approximation algorithm if there exists a constant k such that, for every instance I of P, max{A(I)/A∗(I), A∗(I)/A(I)} ≤ k.
For example, vertex cover.
Examples we discuss in this lecture

1. Vertex Cover
2. Traveling Salesman Problem
3. Bin Packing

1) Vertex Cover

Instance: An undirected graph G = (V, E).
Feasible Solution: A subset C ⊆ V such that at least one vertex of every edge of G belongs to C.
Value: The value of the solution is the size of the cover, |C|, and the goal is to minimize it.
Example

Figure: (a) An undirected graph (b) A trivial vertex cover (c) A vertex cover (d) Another vertex cover
Greedy algorithm 1

1. C ← ∅
2. Pick any edge (u, v) ∈ E.
3. C = C ∪ {u, v}
4. Remove all edges incident on either u or v from E.
5. Repeat the process till all edges are removed from E.
6. Return C.
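The steps above translate directly into code. Below is a minimal Python sketch of greedy algorithm 1, assuming the graph is given as a list of edges over arbitrary hashable vertex labels; the function name and input format are illustrative choices, not part of the original slides.

```python
def vertex_cover_matching(edges):
    """Greedy algorithm 1: repeatedly pick an uncovered edge and
    add both of its endpoints to the cover (the picked edges form
    a maximal matching)."""
    cover = set()
    for u, v in edges:
        # Skip edges already covered by a previously picked edge.
        if u in cover or v in cover:
            continue
        cover.add(u)
        cover.add(v)
    return cover

# Example usage on a small graph.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D")]
print(vertex_cover_matching(edges))
```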
Algorithm 1 with an example

Figure: Execution of algorithm 1

- |C| = 6 (blue vertices) and |C∗| = 3 (red vertices)
- However, if we had picked the edge (B, C) we would have |C| = 4
Performance analysis of algorithm 1

1. The edges picked by the algorithm form a maximal matching (say M); hence C is a vertex cover.
2. The algorithm runs in time polynomial in the input size.
3. The optimum vertex cover (say C∗) must cover every edge in M.
4. Hence C∗ contains at least one of the endpoints of each edge in M, which implies |C∗| ≥ |M|.
5. |C| = 2 · |M| ≤ 2 · |C∗|, where C∗ is an optimal solution.
6. Thus this is a 2-factor approximation algorithm for the vertex cover problem.
Tight example: K_{n,n}

Figure: Complete bipartite graph K_{n,n}: a tight example for algorithm 1

- The size of any maximal matching of this graph is n, hence |M| = n.
- So our algorithm always produces a cover C of size 2n.
- But clearly the optimal solution has size n (one side of the bipartition).
Greedy algorithm 2

1. C ← ∅
2. Take a vertex v ∈ V of maximum degree (ties can be broken arbitrarily).
3. C = C ∪ {v}
4. Remove all edges incident on v from E.
5. Repeat the process till all edges are removed from E.
6. Return C.
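A minimal Python sketch of greedy algorithm 2, again assuming an edge-list input; degrees are recomputed after each deletion for clarity rather than efficiency. The function name is an illustrative choice.

```python
def vertex_cover_max_degree(edges):
    """Greedy algorithm 2: repeatedly take a vertex of maximum remaining
    degree and delete every edge incident on it."""
    remaining = [tuple(e) for e in edges]
    cover = set()
    while remaining:
        # Recompute degrees with respect to the remaining edges.
        degree = {}
        for u, v in remaining:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)  # ties broken arbitrarily
        cover.add(best)
        remaining = [(u, v) for u, v in remaining if best not in (u, v)]
    return cover
```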
Algorithm 2 with an example

Figure: Execution of algorithm 2
Performance analysis of algorithm 2

Generalizing the previous example:

Figure: A bipartite graph. The top row has k! vertices, each of degree k; the bottom rows have, for i = k, k−1, ..., 1, a group of k!/i vertices of degree i (k!/k vertices of degree k, k!/(k−1) vertices of degree k−1, ..., k! vertices of degree 1).

1. The vertices picked by the algorithm form a vertex cover.
2. The algorithm runs in time polynomial in the input size.
3. The optimum solution C∗ contains all top vertices, so |C∗| = k!.
4. The solution C given by algorithm 2 contains all bottom vertices.
5. Hence |C| = k!(1/k + 1/(k−1) + · · · + 1) ≈ k! log k = log k · |C∗|.
Open problem

► Design an approximation algorithm which gives a better approximation ratio.
► A better approximation ratio for the vertex cover problem: 2 − 1/√(log n) [Karakostas, 2009].
► There is no α-approximation algorithm for vertex cover with α < 7/6 unless P = NP [Håstad, 2001].
2) Traveling salesman problem (TSP)

► The TSP describes a salesman who must travel between n cities.
► The order in which he visits the cities does not matter, as long as he visits each city exactly once during his trip and finishes where he started.
► Each city is connected to the other cities by airplane, or by road or railway.
► The salesman wants to keep the total travel cost as low as possible.
Reducing TSP to a graph problem

Instance: A complete weighted undirected graph G = (V, E) with non-negative edge weights.
Feasible Solution: A cycle that visits each vertex exactly once.
Value: The value of the solution is the sum of the weights of the edges in the cycle, and the goal is to find a cycle whose total weight is minimum.
Example

Figure: A TSP instance on five cities C1–C5 with edge weights 20, 31, 65, 66, 85, 100, 101, 110 (city pairs not connected directly are assigned weight ∞).
Hardness

1. Bad news: hard to approximate!
- For any c > 1, there is no polynomial time algorithm which can approximate TSP within a factor of c, unless P = NP.
- In fact, the existence of an O(2^n)-approximation algorithm would imply that P = NP.
- A simple reduction from Hamiltonian cycle.
2. Good news: easy to approximate if the edge weights satisfy the triangle inequality.
- The triangle inequality holds in a (complete) graph with weight function w on the edges if, for any three vertices u, x, v in the graph, w(u, v) ≤ w(u, x) + w(x, v).
- TSP with the triangle inequality is also known as (a.k.a.) Metric TSP.
- Metric TSP is still NP-hard, but now we can approximate it.
Algorithm 1: Nearest addition algorithm

1. Start with a tour T which initially includes the two closest nodes u, v ∈ V, and let S = {u, v}.
2. Repeat until |S| = n:
(a) Find a node vj ∈ V \ S that is closest to S. Let vj be closest to the node vi ∈ S, and let vk be the node following vi in T.
(b) Update T by replacing the edge vi vk with the detour vi vj vk, and set S = S ∪ {vj}.

Figure: Execution of the nearest addition algorithm on a small instance (edge weights 10, 12, 20, 30, 40).
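A possible Python sketch of the nearest addition steps above, assuming the input is a symmetric cost matrix c satisfying the triangle inequality; the tour is returned as a list of vertex indices with the starting vertex repeated at the end. The function name and representation are illustrative assumptions.

```python
def nearest_addition(c):
    """Nearest addition heuristic for metric TSP on cost matrix c."""
    n = len(c)
    # Start the tour with the two closest nodes.
    u, v = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda p: c[p[0]][p[1]])
    tour = [u, v, u]            # the cycle u -> v -> u
    in_tour = {u, v}
    while len(in_tour) < n:
        # Node vj outside the tour that is closest to some tour node vi.
        vj, vi = min(((j, i) for j in range(n) if j not in in_tour
                      for i in in_tour),
                     key=lambda p: c[p[0]][p[1]])
        pos = tour.index(vi)            # vk = tour[pos + 1]
        tour.insert(pos + 1, vj)        # detour vi -> vj -> vk
        in_tour.add(vj)
    return tour
```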
Analysis of Algorithm 1

1. The algorithm is closely related to Prim's MST algorithm.
2. The edges identified form an MST.
3. The cost of the optimal tour is at least the cost of the MST on the same input.
4. The cost of the tour on the first two nodes vi and vj is exactly 2·ci,j.
5. Consider an iteration in which a node vj is inserted between nodes vi and vk in the current tour.
6. The increase in the length of the tour is ci,j + cj,k − ci,k.
7. But, by the triangle inequality, cj,k ≤ cj,i + ci,k.
8. So the increase in cost in this iteration is at most ci,j + cj,i = 2·ci,j.
9. Hence the final tour has cost at most twice the cost of the MST.
10. Thus Algorithm 1 is a 2-factor approximation algorithm.
Algorithm 2: Double tree algorithm

1. Compute an MST.
2. Replace each edge of the MST by two copies of itself.
3. Find an Eulerian tour.
4. Shortcut the tour to get a Hamiltonian cycle.
5. Return the Hamiltonian cycle.
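A minimal Python sketch of the double tree algorithm under the same cost-matrix assumption. It uses Prim's algorithm for the MST; rather than building the doubled tree and its Eulerian tour explicitly, the sketch relies on the standard observation that a preorder walk of the MST is exactly that Eulerian tour with repeated vertices already shortcut away.

```python
def double_tree(c):
    """Double tree 2-approximation for metric TSP on cost matrix c."""
    n = len(c)
    # Prim's algorithm for an MST rooted at vertex 0.
    parent = [0] * n
    best = [c[0][j] for j in range(n)]
    in_tree = [False] * n
    in_tree[0] = True
    children = {i: [] for i in range(n)}
    for _ in range(n - 1):
        v = min((j for j in range(n) if not in_tree[j]), key=lambda j: best[j])
        in_tree[v] = True
        children[parent[v]].append(v)
        for j in range(n):
            if not in_tree[j] and c[v][j] < best[j]:
                best[j], parent[j] = c[v][j], v
    # Preorder walk of the MST = shortcut Eulerian tour of the doubled MST.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]   # close the cycle by returning to the start
```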
Example

Figure: The MST of the example graph on C1–C5 (edges C1C3, C2C3, C3C4, C3C5, with weights 31, 66, 20, 65), and the same tree with each edge doubled.

Eulerian tour: C1, C3, C4, C3, C5, C3, C2, C3, C1
Hamiltonian cycle (after shortcutting): C1, C3, C4, C5, C2, C1
Analysis of Algorithm 2

1. Let C∗ be the cost of an optimal tour.
2. The cost of the MST ≤ C∗.
3. The cost of the Eulerian tour ≤ 2C∗.
4. The cost of the Hamiltonian cycle ≤ the cost of the Eulerian tour (by the triangle inequality).
5. Thus Algorithm 2 is a 2-factor approximation algorithm.
Some results

1. It is possible to get an approximation factor of 1.5 (Christofides' algorithm).
2. Unless P = NP, no α-approximation algorithm for metric TSP exists for any constant α < 220/219 ≈ 1.0045.
3) Bin Packing

Instance: n items with sizes a1, a2, ..., an (0 < ai ≤ 1).
Feasible Solution: A packing of the items into unit-sized bins.
Value: The value of the solution is the number of bins used, and the goal is to minimize it.
Algorithm 1: First Fit

1. Place the items in the order in which they arrive.
2. Place the next item into the lowest-numbered bin in which it fits.
3. If it does not fit into any open bin, start a new bin.

Ex: Consider the set of items S = {0.4, 0.8, 0.5, 0.1, 0.7, 0.6, 0.1, 0.4, 0.2, 0.2} and bins of size 1.

Figure: Packing under First Fit — bins {0.4, 0.5, 0.1}, {0.8, 0.1}, {0.7, 0.2}, {0.6, 0.4}, {0.2}; 5 bins in total.
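A minimal Python sketch of First Fit, assuming items are floats in (0, 1] and bins have capacity 1; a small tolerance guards against floating point round-off. The function name is an illustrative choice.

```python
def first_fit(items, capacity=1.0, eps=1e-9):
    """First Fit: put each item into the lowest-numbered bin it fits in,
    opening a new bin when no open bin has room."""
    bins, loads = [], []          # bins[i] holds items, loads[i] their total size
    for a in items:
        for i in range(len(bins)):
            if loads[i] + a <= capacity + eps:
                bins[i].append(a)
                loads[i] += a
                break
        else:                     # no open bin fits: start a new one
            bins.append([a])
            loads.append(a)
    return bins

S = [0.4, 0.8, 0.5, 0.1, 0.7, 0.6, 0.1, 0.4, 0.2, 0.2]
print(first_fit(S))               # 5 bins for this instance
```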
Analysis of the First Fit algorithm

1. Let C∗ be the optimum number of bins.
2. Suppose our algorithm uses C bins.
3. Then at least (C − 1) bins are more than half full (since we never have two bins that are less than half full).
4. C∗ ≥ Σ_{i=1}^{n} a_i > (C − 1)/2  ⟹  2C∗ > C − 1  ⟹  2C∗ ≥ C (as both are integers).
5. Hence First Fit is a 2-factor approximation algorithm.
Algorithm 2: Next Fit

1. Place the items in the order in which they arrive.
2. Place the next item into the current bin if it fits.
3. If it does not, close that bin and start a new bin.

Ex: Consider the set of items S = {0.4, 0.8, 0.5, 0.1, 0.7, 0.6, 0.1, 0.4, 0.2, 0.2} and bins of size 1.

Figure: Packing under Next Fit — bins {0.4}, {0.8}, {0.5, 0.1}, {0.7}, {0.6, 0.1}, {0.4, 0.2, 0.2}; 6 bins in total.
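A minimal Python sketch of Next Fit under the same assumptions; only the most recently opened bin is ever considered.

```python
def next_fit(items, capacity=1.0, eps=1e-9):
    """Next Fit: keep one open bin; if the next item does not fit,
    close that bin and start a new one."""
    bins, current, load = [], [], 0.0
    for a in items:
        if load + a > capacity + eps:   # does not fit: close the current bin
            bins.append(current)
            current, load = [], 0.0
        current.append(a)
        load += a
    if current:
        bins.append(current)
    return bins

S = [0.4, 0.8, 0.5, 0.1, 0.7, 0.6, 0.1, 0.4, 0.2, 0.2]
print(next_fit(S))                      # 6 bins for this instance
```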
Algorithm 3: Best Fit

1. Place the items in the order in which they arrive.
2. Place the next item into the bin that will have the least room left over after the item is placed in it.
3. If it does not fit in any bin, start a new bin.

Ex: Consider the set of items S = {0.4, 0.8, 0.5, 0.1, 0.7, 0.6, 0.1, 0.4, 0.2, 0.2} and bins of size 1.

Figure: Packing under Best Fit — bins {0.4, 0.5, 0.1}, {0.8, 0.1}, {0.7, 0.2}, {0.6, 0.4}, {0.2}; 5 bins in total.
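A minimal Python sketch of Best Fit under the same assumptions; for each item we scan all open bins and pick the one that would be left with the least slack.

```python
def best_fit(items, capacity=1.0, eps=1e-9):
    """Best Fit: place each item into the feasible bin that will have
    the least room left over; open a new bin if none fits."""
    bins, loads = [], []
    for a in items:
        best_i, best_slack = None, None
        for i, load in enumerate(loads):
            slack = capacity - load - a
            if slack >= -eps and (best_slack is None or slack < best_slack):
                best_i, best_slack = i, slack
        if best_i is None:              # no open bin fits
            bins.append([a])
            loads.append(a)
        else:
            bins[best_i].append(a)
            loads[best_i] += a
    return bins

S = [0.4, 0.8, 0.5, 0.1, 0.7, 0.6, 0.1, 0.4, 0.2, 0.2]
print(best_fit(S))                      # 5 bins for this instance
```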
► Algorithm 2 and Algorithm 3 are also 2-factor approximation algorithms.
► All three algorithms are heuristics.
► Unless P = NP, there cannot exist an α-approximation algorithm for the bin packing problem for any α < 3/2.
- Reduction from the Partition problem.
Polynomial time approximation scheme (PTAS)

1. A PTAS is a family of algorithms {A_ε}.
2. There is an algorithm for every ε > 0.
3. {A_ε} is a (1 + ε)-approximation algorithm for minimization problems.
4. {A_ε} is a (1 − ε)-approximation algorithm for maximization problems.
5. The running time is required to be polynomial in n for every fixed ε, e.g., O(n^(c/ε)) for some constant c.
6. As ε decreases, the running time of the algorithm can increase rapidly, but we can have a different algorithm A_ε for each different ε.
7. We have a fully polynomial time approximation scheme (FPTAS) when the running time is polynomial not only in n but also in 1/ε, e.g., O((1/ε)^2 · n^3).
8. Bin Packing and Metric TSP cannot admit a PTAS unless P = NP. Why?
9. However, there is a PTAS for Euclidean TSP (given a set P of points in the Euclidean plane, find a tour of minimum cost that visits all the points of P) [Arora, 1996].
Shifting Strategy [Hochbaum and Maass, 1985]

Figure: Points in the plane in an enclosed area
Figure: Covering the points with the minimum number of disks of diameter D
Figure: The area I is subdivided into vertical strips of width D
Figure: Groups of l consecutive strips, each group of width l × D
Figure: After the first shift
Figure: After the second shift
Figure: After the third shift
Conclusion

1. Basics of approximation.
2. The need for approximation algorithms.
3. Some examples: Vertex Cover, TSP, and Bin Packing.
4. PTAS.
Some books

1. The Design of Approximation Algorithms by David P. Williamson and David B. Shmoys, First Edition, 2011.
2. Geometric Approximation Algorithms by Sariel Har-Peled, First Edition, 2011.
3. Approximation Algorithms by Vijay V. Vazirani, First Edition.
References

Arora, S. (1996). Polynomial time approximation schemes for Euclidean TSP and other geometric problems. In Proceedings of the 37th Annual Symposium on Foundations of Computer Science, pages 2–11. IEEE.

Håstad, J. (2001). Some optimal inapproximability results. Journal of the ACM (JACM), 48(4):798–859.

Hochbaum, D. S. and Maass, W. (1985). Approximation schemes for covering and packing problems in image processing and VLSI. Journal of the ACM (JACM), 32(1):130–136.

Karakostas, G. (2009). A better approximation ratio for the vertex cover problem. ACM Transactions on Algorithms (TALG), 5(4):41.
THANK YOU!
