
Notes on NP Completeness

Rich Schwartz
November 10, 2013

1 Overview
Here are some notes which I wrote to try to understand what NP completeness means. Most of these notes are taken from Appendix B in Douglas West's graph theory book, and also from Wikipedia. There's nothing remotely original about these notes. I just wanted all this material to filter through my brain and onto paper. I also wanted to collect everything together in a way I like. Here's what these notes do.
Define the class P, using deterministic Turing machines.
Define the class NP and the notion of NP completeness, using non-
deterministic Turing machines.
Prove the Cook-Levin Theorem: SAT is NP complete.
Prove that 3-SAT is NP complete.
Prove that graph 3-colorability is NP complete.
Prove that the problem of finding a maximal independent set, or a
minimal vertex covering, in a graph is NP complete.
Prove that finding a directed or undirected Hamiltonian path or cycle
in a directed or undirected graph is NP complete.
Prove that the traveling salesman problem is NP complete.
Prove that the problem of finding an interior lattice point in an integer
polytope is NP complete.

2 The Class P
2.1 Deterministic Turing Machines
Informally, a deterministic Turing Machine (DTM) is a finite machine which hovers over an infinite tape, writing and erasing symbols on the tape according to a combination of what is written on the tape and its own internal state. This idealizes how we might do a calculation. At any given step we'll modify what we have written according to a combination of what is underneath our pen and what we are thinking.
Formally, the tape is a bi-infinite union of squares, arranged along (say)
the x-axis. One symbol is to be written in each square, and the machine,
at any given time, is focused (or hovers) on one of the squares. The Turing
machine itself is a quintuple (A, S, S0 , F, T ), where

A is a finite alphabet, with some distinguished blank symbol.

S is a finite set of internal states of the machine.

S0 ∈ S is the initial state of the machine.

F ⊂ S is the subset of ending or accepting states.

T : A × S → A × S × {−1, 1} is the transition function.

If the machine finds itself hovering over a letter a and in a state s, then it computes T(a, s) = (a′, s′, ±1). It replaces a with a′, changes to state s′, and moves one unit left or right depending on the sign of the last entry.
An input (or problem) for the machine is a finite string written on the tape, together with a positioning of the head of the machine, that is, where it is initially hovering. Since the tape is infinite, the finite string is padded out with blanks so that every square on the bi-infinite tape has a symbol.
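To make the definition concrete, here is a small Python sketch of a DTM step loop. Nothing here comes from West's book; the dictionary encoding of the transition function, the sparse tape, the starting head position 0, and the step budget are my own choices.

```python
from typing import Dict, Tuple

# A minimal sketch of a deterministic Turing machine, following the
# quintuple (A, S, S0, F, T) above.  The tape is stored sparsely in a
# dictionary; squares not present hold the blank symbol.
BLANK = "_"

def run_dtm(transitions: Dict[Tuple[str, str], Tuple[str, str, int]],
            start_state: str,
            accept_states: set,
            tape_input: str,
            max_steps: int = 10_000):
    """Run the machine for at most max_steps steps.

    transitions maps (letter, state) -> (new_letter, new_state, +1 or -1).
    Returns the accepting state reached, or None if no accept state is
    reached within the step budget.
    """
    tape = {i: ch for i, ch in enumerate(tape_input)}  # blanks are implicit
    head, state = 0, start_state
    for _ in range(max_steps):
        if state in accept_states:
            return state
        letter = tape.get(head, BLANK)
        if (letter, state) not in transitions:
            return None          # no move defined: the machine halts without accepting
        new_letter, new_state, move = transitions[(letter, state)]
        tape[head] = new_letter
        state = new_state
        head += move             # move one square left (-1) or right (+1)
    return None

# Example: a two-state machine that scans right until it reads a blank.
T = {("1", "scan"): ("1", "scan", +1), (BLANK, "scan"): (BLANK, "done", +1)}
print(run_dtm(T, "scan", {"done"}, "111"))   # prints "done"
```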

2.2 Polytime Decision Problems


A decision problem is simply a problem which says 'yes' or 'no' for an infinite family of inputs. The problem is Turing computable if there is a pair of DTMs with the following property. For each input, one or the other of the machines arrives at an accept state in finite time. If it is the first machine, the answer to the problem is 'yes'; if it is the second, the answer is 'no'.

One could also formulate this in terms of a single Turing machine, in which
the accept states are divided into yes states and no states. Here are
some examples of computable decision problems.

Does a graph have an Eulerian circuit?

Does a graph have a Hamiltonian cycle?

Can a graph be embedded in the plane?

Can one 3-color a graph?

In each instance of one of these problems, one takes a specific graph and inputs it into the DTMs, and then lets the computation run. Typically, one would encode the graph as an incidence matrix, and then string out the entries of the matrix so that they fit on the tape. More informally, one performs some finite algorithm on the graph which leads to an answer.
The decision problem is called polytime if there is a polynomial P such that both DTMs run in at most P(n) steps, given a problem encoded with a length-n finite string. Informally, n denotes the size of the instance of the problem. Since the definition is insensitive to the details of P, it often suffices to have a very rough description of size. For instance, for graphs, the size could be either the number of edges or the number of vertices, and the same problems would be counted as polytime.
The set P denotes the set of polytime decision problems. The decision problem for Eulerian circuits is in P because one just checks whether or not all vertices in a given graph have even degree (and, strictly, that the edges all lie in one connected component). Such a calculation is certainly polynomial in the size of the graph.
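Here is a quick sketch of that parity check in Python. The edge-list encoding of the graph is my own choice, and, as noted above, a full Eulerian-circuit test also needs the connectivity check.

```python
from collections import defaultdict

def all_degrees_even(edges):
    """Polytime check used above: every vertex of the graph has even degree.

    edges is a list of pairs (u, v).  (A full Eulerian-circuit test would also
    check that all edges lie in one connected component.)
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

# A 4-cycle has an Eulerian circuit; adding a pendant edge breaks the parity.
print(all_degrees_even([(1, 2), (2, 3), (3, 4), (4, 1)]))          # True
print(all_degrees_even([(1, 2), (2, 3), (3, 4), (4, 1), (1, 5)]))  # False
```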
The decision problem for Hamiltonian cycles seems not to be in P. One way to phrase the famous P ≠ NP conjecture is simply that the Hamiltonian cycle problem is not in P. However, this way of phrasing it ignores many details and makes the P ≠ NP problem seem kind of random.

3 The Class NP
3.1 Non-Deterministic Turing Machines
Given a set Y, let 2^Y denote the set of subsets of Y. Say that a set function from X to Y is a map f : X → 2^Y. A set function from X to Y is a special case of a relation between X and Y. The set-up for a non-deterministic Turing machine (NTM) is exactly the same as for a DTM, except for one point. The map

T : A × S → A × S × {−1, 1}

is replaced by a set map from A × S into A × S × {−1, 1}. We still denote this set map by T.
One can think of an NTM in a deterministic way: Given an input to the machine, the machine makes a tree of calculations. At every step of the calculation, the machine makes all transitions allowed by the set map T. The calculation is a success if one of the branches of the tree ends up in F, the set of accept states. One could build an auxiliary DTM which makes a breadth-first search through the calculation tree produced by the NTM. However, for an infinite (growing) family of inputs, the NTM might stop in polytime whereas the DTM might require exponential time.
One way to think of a successful calculation on an NTM is that some oracle feeds the machine a list of choices to make at each stage (or maybe the machine just gets lucky), and then the machine is capable of following these steps and getting to an accept state.
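Here is a small Python sketch of that breadth-first search through the calculation tree. The machine is abstracted into a step function, which returns the set of successor configurations allowed by the set map T, and an accepting test; these names and the configuration encoding are my own assumptions, not anything from the notes above.

```python
from collections import deque

def ntm_accepts(start, step, accepting, max_steps):
    """Breadth-first search through the computation tree of an NTM.

    step(config) returns the set of successor configurations allowed by the
    set map T, and accepting(config) tests for an accept state.  The search
    may visit exponentially many configurations even when some lucky branch
    of the NTM accepts within max_steps steps.
    """
    frontier = deque([(start, 0)])
    while frontier:
        config, depth = frontier.popleft()
        if accepting(config):
            return True
        if depth < max_steps:
            for successor in step(config):
                frontier.append((successor, depth + 1))
    return False
```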

3.2 Non-Deterministic Polytime Problems


A decision problem is said to be a nondeterministic polytime problem if there is an NTM, M, and a polynomial P with the following properties. If an instance of the problem has size n, then the answer to the decision problem is yes if and only if M lands in an accept state after at most P(n) steps. We say that the machine is allowed to run for P(n) steps on an input of size n.
The Hamiltonian circuit decision problem is a classic problem in NP. We can make an NTM which tries every path in a graph of length at most n^2. The machine then rejects a path as soon as it revisits a vertex without having hit all the vertices.
The set NP denotes the set of nondeterministic polytime decision problems. The famous P ≠ NP conjecture is that there exists a problem in NP that does not belong to P.

3.3 Reduction
Suppose that X and Y stand for two decision problems. A reduction from X to Y is a two-part algorithm. The first part of the algorithm converts any given instance x of X into an instance y of Y such that

The number of steps of the conversion is bounded by a polynomial function of the size of x.

The size of y is bounded by a polynomial function of the size of x.

The second part of the algorithm converts any given solution of y to a solution of x in such a way that the number of steps in the conversion is bounded by a polynomial in the size of y. By algorithm, we mean a pair of auxiliary DTMs which do the jobs.
In case there is a reduction from X to Y, we'll write X ≤ Y. Essentially, a reduction from X to Y is a procedure whereby one can use a solution to Y to create an equally efficient solution for X. For instance, if Y ∈ P and X ≤ Y, then X ∈ P as well.
A decision problem is called NP hard if every problem in NP reduces
to it. An NP hard problem may or may not belong to NP. The decision
problem is called NP-complete if it is NP-hard and it belongs to NP. It is
a nontrivial result that NP complete problems actually exist.

4 Satisfiability (SAT)
4.1 The Cook-Levin Theorem
Here is one more example of a problem in NP. Let ∨ stand for 'or'. A simple clause is an expression of the form a1 ∨ ... ∨ an, where each term ai is either a boolean variable or the negation ¬a of a boolean variable a. Here ¬a is true if and only if a is false. The clause is true if and only if at least one of its terms is true. For instance ¬a1 ∨ a2 is true if and only if (non-exclusively) either a1 is false or a2 is true.
A compound clause is an expression of the form C1 & ... & Ck, where each Ci is a simple clause. The compound clause is true if and only if every one of the component simple clauses is true. The satisfiability problem (SAT) asks the following: Given a compound clause, is there a truth-assignment for the variables which makes the whole thing true? One can certainly make a DTM which checks in polytime whether or not a given truth-assignment works for a given clause. One can then build an NTM whose first step is to write down all possible truth-assignments, and whose remaining steps follow the deterministic calculation for each truth-assignment. Hence, SAT is in NP.
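Here is a sketch of the deterministic polytime check just described. Clauses are encoded DIMACS-style, as lists of nonzero integers, with k standing for the variable a_k and −k for its negation; this encoding is an assumption of mine, not anything in the notes.

```python
def satisfies(clauses, assignment):
    """Check a truth-assignment against a compound clause in polynomial time.

    clauses: list of simple clauses, each a list of nonzero ints
             (k stands for a_k, -k for its negation).
    assignment: dict mapping variable index k -> True/False.
    """
    def literal_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value

    return all(any(literal_true(lit) for lit in clause) for clause in clauses)

# (not a1 or a2) & (a1 or a2)
clauses = [[-1, 2], [1, 2]]
print(satisfies(clauses, {1: False, 2: True}))   # True
print(satisfies(clauses, {1: True, 2: False}))   # False
```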

Theorem 4.1 (Cook-Levin) Any decision problem in NP reduces to SAT.


Hence SAT is NP-complete.

Proof: Suppose that X ∈ NP and that M = (A, S, S0, F, T) is a non-deterministic Turing machine solving X. Let x be an instance of X, with input size n. Let N = p(n) be a bound on the number of steps M is allowed to run on x. The convention is that, if some computational branch of M reaches an accept state before N steps, it just stays there until N steps are reached.
We want to convert x into an instance of SAT. The basic idea is to encode the entire working of M on x into a compound clause which is satisfied if and only if M reaches an accept state. We introduce the following variables.

Aijk. This variable is true iff letter i is written in square j at step k of the calculation.

Hik. This variable is true iff M is hovering over square i at step k of the calculation.

Sik. This variable is true iff M is in state i at step k of the calculation.

We think of a variable assignment as specifying a single branch of the computation tree. These clauses enforce the initial set-up.

S00. This means that M starts in state S0.

H00. This means that M initially hovers over square 0.

Ax(i),i,0. Here x(i) denotes the letter of the input written in square i, and |i| ≤ p(n). The truth of these variables encodes the fact that x is the input to M.
These clauses enforce the basic properties of any Turing machine.

¬Aijk ∨ ¬Ai′jk, for i ≠ i′. This means that two distinct letters i and i′ cannot both be written in square j at step k. In short, there is only one letter per square.

¬Hik ∨ ¬Hi′k, for i ≠ i′. This means that M cannot hover over 2 distinct squares at once.

¬Sik ∨ ¬Si′k, for i ≠ i′. This means that M cannot be in 2 states at once.

(Aijk & ¬Ai,j,k+1) ⇒ Hjk. This means that the tape can only change in a spot where M is focused. This implication is the same as the simple clause ¬Aijk ∨ Ai,j,k+1 ∨ Hjk.

(Aijk & Hjk & Sℓk) ⇒ ∨ (Ai′,j,k+1 & Hj′,k+1 & Sℓ′,k+1). The disjunction on the right is taken over all triples (i′, j′, ℓ′) related to the triple on the left by the set function T; in particular j′ = j ± 1. Together with the other clauses, this enforces the condition that M makes a legal transition at each step. This expression can be rewritten as a compound clause because, setting B equal to the disjunction on the right, the expression (a1 & a2 & a3) ⇒ B is equivalent to ¬a1 ∨ ¬a2 ∨ ¬a3 ∨ B; expanding the conjunctions inside B into clauses multiplies the size only by a constant depending on M, not on n.

This last clause enforces the condition that the computation arrives in an accept state.

SF1,N ∨ ... ∨ SFk,N, where F1, ..., Fk are the accept states. Here N = p(n).

If some computational branch of M reaches an accept state in at most N steps, then we can assign truth values to the variables so that the compound clause is satisfied. Conversely, if we have a truth-assignment which satisfies the compound clause, we simply use it as a guide for running a branch of M. The conversions back and forth involve only q(n) steps, where q is another polynomial in n which depends only on p (and on the machine M). This completes the proof.

4.2 Reduction to 3-SAT


Say that a 3-clause is a compound clause of the form

(a1 ∨ b1 ∨ c1) & (a2 ∨ b2 ∨ c2) & ... & (an ∨ bn ∨ cn).

3-SAT is the decision problem which asks whether or not the truth values of the variables can be set so as to satisfy a given 3-clause. I believe that this theorem is due to Karp. In fact, all the reductions I'm going to explain below are due to Karp.

Theorem 4.2 3-SAT is NP complete.

Proof: Since SAT is in NP, and 3-SAT is a sub-problem of SAT, we know
that 3-SAT is in NP. To finish the proof, we just have to show that SAT
reduces to 3-SAT.
The first step is to observe that

(a ∨ b) ⇔ c

is logically equivalent to the 3-clause

(¬a ∨ c ∨ c) & (¬b ∨ c ∨ c) & (a ∨ b ∨ ¬c).

Now, observe that the simple clause a1 ∨ ... ∨ an is equivalent, as far as satisfiability goes, to the clause

(b ∨ a3 ∨ ... ∨ an) & ((a1 ∨ a2) ⇔ b).

Here b is some entirely new variable. The length of the original clause is n and the maximum length of the simple clauses in the replacement is max(3, n − 1). Continuing to make substitutions like this, and padding any shorter clauses by repeating a literal, we produce an equivalent clause in which the maximum length is 3; the whole conversion is clearly polytime.
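Here is a sketch of the substitution used in the proof, in the same integer encoding of literals as the earlier SAT sketch: a long clause a1 ∨ ... ∨ an is replaced by the three 3-clauses expressing (a1 ∨ a2) ⇔ b together with the shorter clause b ∨ a3 ∨ ... ∨ an, and short clauses are padded by repeating a literal.

```python
def to_3sat(clauses, num_vars):
    """Split every simple clause into clauses of length exactly 3.

    clauses: list of lists of nonzero ints (k = a_k, -k = its negation).
    num_vars: number of variables already in use; the new variables b get
    fresh indices above this.  Returns (new_clauses, new_num_vars).
    """
    out = []
    fresh = num_vars
    work = [list(c) for c in clauses]
    while work:
        c = work.pop()
        if len(c) <= 3:
            out.append(c + [c[-1]] * (3 - len(c)))   # pad by repeating a literal
        else:
            fresh += 1
            b = fresh
            a1, a2, rest = c[0], c[1], c[2:]
            # the three 3-clauses expressing b <-> (a1 v a2):
            out.append([-a1, b, b])
            out.append([-a2, b, b])
            out.append([a1, a2, -b])
            # ...and the shorter clause b v a3 v ... v an, to be split further
            work.append([b] + rest)
    return out, fresh

# A clause of length 5 becomes 7 clauses of length 3, using 2 extra variables.
three_cnf, total_vars = to_3sat([[1, -2, 3, 4, -5]], num_vars=5)
```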

5 Graph Colorings
5.1 Basic Definitions
A proper coloring of a graph is an assignment of colors to the vertices of the graph so that vertices incident to a common edge have different colors. A graph is k-colorable if it has a proper coloring with k colors. The k-coloring decision problem asks, for each graph G, whether G is k-colorable. Call this problem k-COLOR.

Lemma 5.1 2-COLOR is in P.

Proof: One can check whether a graph is 2-colorable by coloring some initial vertex 0 and then doing a breadth-first search, repeating for each connected component. The color of each new vertex encountered is forced by the color of the previous vertices. The graph has a proper 2-coloring if and only if the search ends with no contradictions. This is a polytime algorithm.
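Here is a sketch of that breadth-first search in Python; the adjacency-list encoding is my own choice, and the outer loop over starting vertices handles graphs with several connected components.

```python
from collections import deque

def is_two_colorable(adj):
    """BFS 2-coloring check, as in the proof of Lemma 5.1.

    adj: dict mapping each vertex to the list of its neighbors.
    Returns True iff the graph has a proper 2-coloring.
    """
    color = {}
    for start in adj:                     # handle every connected component
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]   # the color of w is forced
                    queue.append(w)
                elif color[w] == color[v]:
                    return False              # contradiction: odd cycle
    return True

# A 4-cycle is 2-colorable, a triangle is not.
square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(is_two_colorable(square), is_two_colorable(triangle))   # True False
```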

We will see that the remainder of the problems are NP complete.

5.2 A Particular Graph
This construction is taken straight from West's book. Let G0 be the graph on the left hand side of Figure 1. We call the 3 leftmost vertices of G0 the inputs and the rightmost vertex of G0 the output. Given a, b, c ∈ {0, 1, 2}, let C(a, b, c) denote the set of proper 3-colorings of G0 which assign the values a, b, c (top to bottom, say) to the inputs. Here are two easily checked facts about G0.

Any coloring in C(0, 0, 0) assigns 0 to the output.

If a + b + c ≠ 0 then there exists some coloring in C(a, b, c) which assigns a nonzero value to the output.

Figure 1: The Graph G0 with 3 inputs and 1 output.

5.3 3-COLOR
One can easily check in polytime that a given 3-coloring of a graph is proper.
Similar to the situation for 3-SAT, this implies that 3-COLOR is in NP.

Theorem 5.2 (Karp) 3-COLOR is NP complete.

Proof: Since 3-SAT is NP complete, it suffices to reduce 3-SAT to 3-COLOR. Let C = C1 & ... & Cn be a 3-compound clause. Suppose that C involves the variables b1, ..., bk. We build a graph GC as follows.

GC has an initial vertex a.

GC has vertices b1, ¬b1, ..., bk, ¬bk. We add to GC all the triangles with vertex sets {a, bi, ¬bi}.

GC has vertices c1, ..., cn. For the ith clause Ci, we insert a copy of G0 whose inputs are the 3 literals (the b or ¬b vertices) of Ci and whose output is ci.

GC has a final vertex d which is connected to every ci by an edge. d also connects to a.

The construction of GC from C takes polynomial time and creates a graph
whose size is a polynomial function of the size of C. We claim that GC is
3-colorable if and only if C is satisfiable.
Suppose first that C is satisfiable. We assume in our construction that the b-variables are given the truth values which satisfy C. Then we color GC as follows.

χ(a) = 2. Here χ(a) is the color of a.

χ(bi) = 1 or χ(bi) = 0 depending on whether bi is true or false. Also χ(bi) + χ(¬bi) = 1. Thus, each triangle {a, bi, ¬bi} is properly colored.

From the fact that C is satisfied, and the properties of G0, we see that it is possible to arrange that χ(cj) ∈ {1, 2} for all j.

We set χ(d) = 0.

This is a proper coloring.


Conversely, suppose that GC has a proper coloring χ. By changing the names of the colors, if necessary, we arrange that χ(a) = 2 and χ(d) = 0. Then χ(bi) ∈ {0, 1} and χ(bi) + χ(¬bi) = 1, because each triangle {a, bi, ¬bi} is properly colored. We assign truth values to the b-variables using the coloring: bi is true if and only if χ(bi) = 1. Since χ(d) = 0 and d is adjacent to every ci, we must have χ(ci) ≠ 0 for all i. But then it never happens that the inputs to the copy of G0 ending at ci are all colored 0. Hence each clause Ci is satisfied. Hence C is satisfied. The conversion from the coloring to the truth-assignment takes linear time.

6 Independent Sets and Vertex Covers


6.1 Independent Sets
Let G be a graph. An independent set of size k, or a k-independent set, is a collection of k vertices of G, no two of which are incident to a common edge. One can fix k and ask the question: Does G have a k-independent set? This decision problem is in P: if G has n vertices then one simply checks the independence of each of the n-choose-k possibilities. There are O(n^k) possibilities, and each check is polytime.
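Here is a brute-force sketch of the fixed-k check; the vertex-list/edge-list encoding is my own.

```python
from itertools import combinations

def has_k_independent_set(vertices, edges, k):
    """Brute-force check used above: try all n-choose-k vertex subsets.

    For fixed k this is O(n^k) subsets, each checked in polytime, so the
    fixed-k problem is in P.  (When k is part of the input, as in INDEP,
    this is no longer a polynomial-time algorithm.)
    """
    edge_set = {frozenset(e) for e in edges}
    for subset in combinations(vertices, k):
        if all(frozenset((u, v)) not in edge_set
               for u, v in combinations(subset, 2)):
            return True
    return False

# The 4-cycle has an independent set of size 2 but not of size 3.
print(has_k_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)], 2))  # True
print(has_k_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)], 3))  # False
```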

A more difficult decision problem asks: Given the pair (G, k), does there exist a k-independent set? This time k is allowed to grow, and is taken as part of the input. Call this problem INDEP. Note that there is only one INDEP: it does not have a parameter like the coloring problems. An argument like the ones above shows that INDEP is in NP. In this section, we'll show that INDEP is NP complete.

Lemma 6.1 Let n be the number of vertices of G. Let Kk be the complete graph on k vertices. Then G has a proper k-coloring if and only if G × Kk (the graph product) has an n-independent set.

Proof: Suppose that G has a proper k-coloring χ. Let

SG = ⋃v∈V(G) (v, χ(v)).

The union takes place over all vertices of G. Certainly SG has cardinality n. To show that SG is independent, suppose that (v, χ(v)) and (w, χ(w)) share an edge. Then either v = w or χ(v) = χ(w). The first case is not possible. In the second case, when χ(v) = χ(w), there must be an edge between v and w in G. But this does not happen, because χ is a proper coloring. Hence SG is an n-independent set.
Conversely, suppose that S is an n-independent set. Consider two elements of S, namely (v, i) and (w, j). It cannot happen that v = w, because then i and j (like all pairs of vertices of Kk) would share an edge. By the pigeonhole principle, each vertex of G appears exactly once in the first coordinate of S. We color the vertices of G according to the value of the second coordinate. The independence of S guarantees that adjacent vertices get different colors. For later reference, call this construction (∗).
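Here is a sketch of the constructions in the lemma. The edge rule for the product is the one the proof uses: (v, i) and (w, j) share an edge when v = w and i ≠ j, or when i = j and vw is an edge of G; this is my reading of "the graph product", not a definition taken from the notes.

```python
from itertools import combinations

def product_with_Kk(vertices, edges, k):
    """Build the product of G with K_k, with the edge rule used in Lemma 6.1."""
    product_vertices = [(v, i) for v in vertices for i in range(k)]
    product_edges = []
    for v in vertices:
        for i, j in combinations(range(k), 2):
            product_edges.append(((v, i), (v, j)))     # same vertex, two colors
    for (u, w) in edges:
        for i in range(k):
            product_edges.append(((u, i), (w, i)))     # same color, edge of G
    return product_vertices, product_edges

def independent_set_from_coloring(coloring):
    """S_G = { (v, chi(v)) : v in V(G) }, as in the first half of the proof."""
    return {(v, c) for v, c in coloring.items()}

def coloring_from_independent_set(ind_set):
    """The construction (*): read the color of v off the second coordinate."""
    return {v: i for (v, i) in ind_set}
```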

Theorem 6.2 (Karp) INDEP is NP complete.

Proof: It suffices to reduce 3-COLOR to INDEP. To prove that G has a proper 3-coloring, it suffices to check that G × K3 has an n-independent set. The size of the pair (G × K3, n) is polynomial in n, and the conversion takes a polynomial number of steps. If some NTM shows that G × K3 has an n-independent set, then we just trace through the steps of (∗) to get our proper 3-coloring.

6.2 Vertex Covers
A k-covering of a graph G is a collection of k vertices such that every edge
in G is incident to a vertex in the cover.

Lemma 6.3 Let G be a graph with n vertices. Let M be the size of a maximum independent set and let m be the size of a minimum vertex cover. Then M + m = n.

Proof: Suppose that S is a cover. Consider the complement S′ = V(G) − S. If two vertices in S′ shared an edge, that edge would not be incident to any vertex of S. Hence S′ is independent. Conversely, if S is an independent set, then S′ = V(G) − S is a vertex cover: otherwise some edge is not incident to a vertex in S′ and hence has both ends in S, which is impossible. Taking complements therefore exchanges independent sets and vertex covers, so a minimum cover is the complement of a maximum independent set, and M + m = n.

The problem COVER asks: Given a pair (G, k), does there exist a vertex cover of G having size k? Arguments like the ones above show that COVER is in NP. Using Lemma 6.3, one immediately reduces INDEP to COVER. Hence COVER is NP complete.
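Since the reduction is just complementation, the code is tiny; this is a sketch with my own encoding of instances.

```python
def cover_instance_from_indep(vertices, k):
    """Reduce INDEP to COVER via Lemma 6.3: (G, k) becomes (G, n - k)."""
    return len(vertices) - k

def independent_set_from_cover(vertices, cover):
    """Convert a vertex cover back to an independent set: take the complement."""
    return set(vertices) - set(cover)

# In the 4-cycle, {1, 3} is a minimum cover, so {2, 4} is a maximum independent set.
print(independent_set_from_cover([1, 2, 3, 4], {1, 3}))   # {2, 4}
```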

7 Hamiltonian Paths and Cycles


7.1 The Key Construction
Figure 2 shows a particular directed graph Q. We think of Q as extending
the union of the two directed horizontal edges by adding in 4 more edges,
making a kind of square.

Figure 2: The graph Q.

Let G be a graph and let k be an integer. We're going to construct a directed graph H = H(G, k) such that H has a directed Hamiltonian path from a vertex w0 to a vertex wk if and only if G has a k-vertex covering. Here is the construction. Say that a flag of G is a pair (v, e) where v is a vertex of G and e is an edge of G incident to v. Let v1, ..., vn be the vertices of G. Let di be the degree of vi. Let fi1, fi3, fi5, ... denote the flags having point vi. The ordering doesn't matter.

Add vertices w0 , ..., wk . We call these the anchors.

Add directed paths P1 , ..., Pn of length 2d1 , ..., 2dn .

Add directed edges from wi to the start of Pj for all i = 0, ..., k − 1 and all j = 1, ..., n.

Add directed edges from the end of Pj to wi for all i = 1, ..., k and all j = 1, ..., n.

Let Pi1, Pi3, ... be the odd consecutive edges of Pi. There are di such edges. We extend Pab ∪ Pcd by a copy of Q whenever fab and fcd are flags with a common edge.

Lemma 7.1 If G has a k-vertex cover, then H has a directed Hamiltonian path connecting w0 to wk.

Proof: The idea is to start with a path from w0 to wk and then improve it until it is Hamiltonian. We can re-index the vertices so that v1, ..., vk comprise the vertex cover of G. Let γ be the path which starts at w0, then takes P1, then moves to w1, then takes P2, and so on, until reaching wk.
Let Pcd be any of the odd segments with c > k. There is a unique flag fab, with a ≤ k and b odd, so that fab and fcd share an edge. We modify γ so that, instead of directly taking Pab, we go the long way around the copy of Q joining Pab to Pcd. The longer path now hits the endpoints of Pcd. Doing this for every such Pcd, we get our Hamiltonian path.

We'll establish the converse to this result in several steps. We will suppose that γ is a directed Hamiltonian path of H connecting w0 to wk. Our construction above works with the odd edges Pi1, Pi3, .... Now we work with the even edges Pi2, Pi4, .... In the next lemma, Pi0 will stand for an edge connecting an anchor to the start of Pi. Likewise Pi,2di will stand for an edge connecting the end of Pi to an anchor. Again, if γ contains such edges, then they are unique.

Lemma 7.2 Suppose that γ contains one of Pi,j−1 and Pi,j+1. Then γ contains both of them.

Proof: Without loss of generality we may take i = 1. For ease of notation, we take j = 2, and we will show that P12 ∈ γ implies that P14 ∈ γ. This argument does not cover the general case, but the general case uses essentially the same reasoning.
Figure 4: The two cases.

There are two cases. Suppose first that P13 ∈ γ. If P14 ∉ γ, then γ must take the path shown on the left side of Figure 4. Since γ is Hamiltonian, γ must contain the red vertex. However, there are only two outgoing edges from the red vertex, and these both crash back into γ.
Suppose that P13 ∉ γ. Then γ takes the path on the right hand side of Figure 4. Since γ is Hamiltonian, γ must contain the red dot. But then the two ingoing edges to the red dot emanate from points already on γ.

Corollary 7.3 Suppose that any of the following is true:

γ contains an even edge of Pi.

γ contains an edge joining an anchor to the start of Pi.

γ contains an edge joining the end of Pi to an anchor.

Then all of the above are true, and γ contains every even edge of Pi.

Corollary 7.4 G has a vertex cover of size k.

Proof: Let S denote the set of vertices vi of G such that γ contains an even edge of Pi. Suppose S has size at least k + 1. By Corollary 7.3, γ contains at least k + 1 edges leaving anchors. But γ contains only k such edges, since γ leaves each of w0, ..., wk−1 exactly once and never leaves wk. In fact, S has size exactly k, because it happens exactly k times that γ leaves an anchor.
Let e be some edge of G. We want to show that one endpoint of e lies in S. Suppose this is not the case. Let vi and vj be the endpoints of e. Let pi and pj be the start points of Pi and Pj respectively. γ does not contain edges joining anchors to pi or pj. Hence, γ must join pi to pj. But then γ contains Pi1 and Pj1. By Corollary 7.3, γ contains neither Pi2 nor Pj2. But then γ joins the starting points of Pi2 and Pj2. This shows that γ has a 4-cycle, a contradiction. The contradiction shows that S is a vertex cover.

7.2 Directed Hamiltonian Path


Let G be a directed graph, and let v, w be two vertices of G. The decision problem DHP takes as input (G, v, w) and asks if there is a directed Hamiltonian path starting at v and ending at w. Arguments like those above show that DHP is in NP.

Theorem 7.5 (Karp) DHP is NP complete.

Proof: The proof amounts to reducing COVER to DHP. Let (G, k) be an input problem for COVER. The construction in the previous section produces a triple (H, w0, wk) such that the answer to DHP for (H, w0, wk) is yes if and only if the answer to COVER for (G, k) is yes. The size of H is polynomial in the size of G. Moreover, there is a polytime algorithm to convert the directed Hamiltonian path in (H, w0, wk) to the desired cover of G.

7.3 Hamiltonian Paths and Cycles


Let HP be the version of DHP for undirected graphs. Arguments like those above show that HP is in NP.

Theorem 7.6 (Karp) HP is NP complete.

Proof: The idea is to reduce DHP to HP. Let G be a directed graph. We form an undirected graph H as follows. For each vertex a of G we produce a 3-path with vertices a−1, a0, a1, joined in this order. We then join a1 to b−1 whenever a directed edge of G joins a to b.
If we want to solve DHP for (G, v, w), we solve HP for (H, v−1, w1). If this latter problem has a solution, then the Hamiltonian path γ on H has the following properties.

Since γ contains v0, the first edge of γ connects v−1 to v0.

The second edge of γ must connect v0 to v1.

In general, and by induction, once γ reaches some vertex a1, the next vertex reached by γ must be some b−1.

The last edge of γ joins w0 to w1.

If we just list out the vertices v0, ..., a0, b0, ..., w0 hit by γ, we produce the vertices of G, and they are joined, in this order, by a directed Hamiltonian path from v to w.
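Here is a sketch of the vertex-splitting construction just described; the pair notation (a, −1), (a, 0), (a, 1) for the split vertices is my own encoding.

```python
def split_vertices(vertices, directed_edges, v, w):
    """Reduction from DHP to HP by vertex splitting.

    Each vertex a of G becomes the 3-path (a,-1) - (a,0) - (a,1), and (a,1)
    is joined to (b,-1) for every directed edge a -> b.  Returns the edge
    list of the undirected graph H and the endpoints for the HP instance.
    """
    H_edges = []
    for a in vertices:
        H_edges.append(((a, -1), (a, 0)))
        H_edges.append(((a, 0), (a, 1)))
    for (a, b) in directed_edges:
        H_edges.append(((a, 1), (b, -1)))
    return H_edges, (v, -1), (w, 1)
```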

Finally, we arrive back at the Hamiltonian cycle problem, which we discussed in connection with a naive formulation of the P ≠ NP problem. Let HC denote the decision problem which asks: does a graph G have a Hamiltonian cycle? Arguments like those above show that HC is in NP.

Theorem 7.7 (Karp) HC is NP complete.

Proof: The idea is to reduce HP to HC. Suppose we want to solve HP for some triple (G, v, w). We form a new graph H simply by adding an extra vertex x and joining it to both v and w. A Hamiltonian path on G from v to w closes up, through x, to a Hamiltonian cycle on H. Conversely, if we can solve HC on H, then the resulting Hamiltonian cycle must visit x and hence must contain, consecutively, the edges xv and xw. We just omit these two edges to get a Hamiltonian path on G which connects v to w.
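Here is a sketch of this reduction and of the recovery step; the placeholder name for the extra vertex, and the representation of a cycle as a list of vertices in cyclic order, are my own assumptions.

```python
def hp_to_hc(edges, v, w, x="extra"):
    """Reduction from HP to HC: add one extra vertex x joined to v and w."""
    return list(edges) + [(x, v), (x, w)]

def hp_from_hc(cycle, v, w, x="extra"):
    """Recover the Hamiltonian v-w path from a Hamiltonian cycle through x.

    cycle is a list of vertices in cyclic order (first vertex not repeated).
    Since x has only the edges xv and xw, its neighbors on the cycle are v
    and w, so dropping x leaves a Hamiltonian path from v to w.
    """
    i = cycle.index(x)
    path = cycle[i + 1:] + cycle[:i]         # the cycle with x cut out
    return path if path[0] == v else path[::-1]
```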

8 The Traveling Salesman Problem


Let Kn be the complete graph on n vertices. Suppose that each edge of Kn is given a non-negative integer weight. The traditional traveling salesman problem asks for the Hamiltonian cycle of Kn having minimum total weight. The decision problem TSP starts with a pair (G, d), where G is a weighted copy of Kn and d is some integer. The problem asks: Does G have a Hamiltonian cycle of weight at most d? Arguments like those above show that TSP is in NP.

Theorem 8.1 (Karp) TSP is NP complete.

Proof: The idea is to reduce HC to TSP. Suppose that G is a graph with n vertices. We let H be a weighted copy of Kn; note that H has fewer than n^2 edges. If an edge of H comes from an edge of G, we give it weight 1. Otherwise, we give the edge weight n^2. Note that H has a Hamiltonian cycle of weight at most n^2 if and only if G has a Hamiltonian cycle. So, if we can solve TSP for (H, n^2), then we can solve HC for G.
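Here is a sketch of the weight assignment in the proof; the frozenset keys for edges are my own encoding.

```python
from itertools import combinations

def hc_to_tsp(vertices, edges):
    """Reduction from HC to TSP: weight 1 on edges of G, weight n^2 otherwise.

    Returns (weights, bound): the graph G has a Hamiltonian cycle if and only
    if the weighted copy of K_n has a Hamiltonian cycle of weight at most bound.
    """
    n = len(vertices)
    edge_set = {frozenset(e) for e in edges}
    weights = {}
    for u, v in combinations(vertices, 2):
        key = frozenset((u, v))
        weights[key] = 1 if key in edge_set else n * n
    return weights, n * n
```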

The original traveling salesman problem is at least as hard to solve as


TSP, because a solution to the original problem for G immediately solves
TSP for any pair (G, d). When G has integer weights whose sizes are at
most polynomial in the size of G, one can solve the original problem by
solving TSP a polynomial number of times.
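One way to read the last remark: given the decision problem TSP as a black box, a binary search over d recovers the minimum weight of a Hamiltonian cycle with only O(log(max weight)) calls. Recovering an optimal cycle itself takes a further, standard self-reduction, which I omit. The sketch below assumes such a black box.

```python
def min_tour_weight(tsp_decision, max_weight):
    """Binary search for the minimum Hamiltonian-cycle weight.

    tsp_decision(d) is a black box answering the decision problem
    'is there a Hamiltonian cycle of weight at most d?'.
    Uses O(log(max_weight)) calls to the black box.
    """
    lo, hi = 0, max_weight
    if not tsp_decision(hi):
        return None                    # no tour at all within max_weight
    while lo < hi:
        mid = (lo + hi) // 2
        if tsp_decision(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```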

9 Interior Lattice Point


Say that an integer polytope is a compact convex polytope P ⊂ R^n whose faces are the solution sets of integer linear equations. A lattice point is a point of Z^n. The interior lattice point problem (ILP) asks: Does the interior of P contain a lattice point? The size of P in this case is the maximum of

The number of faces of P.

The diameter of P.

Lemma 9.1 ILP belongs to NP.

Proof: Using a real linear programming algorithm, such as the simplex method (or, for a worst-case polynomial-time guarantee, the ellipsoid method), one can find a vertex of P in polynomial time. One can then specify a cube, of side length on the order of the diameter of P, which contains P in its interior. Next, one can build an NTM which checks whether or not a lattice point p of this cube lies in the interior of P, by evaluating the equations defining the sides of P at p. Since the NTM considers all such points simultaneously, the NTM runs in polynomial time.

This result is probably due to Karp.

Theorem 9.2 ILP is NP complete.

Proof: The idea is to reduce 3-SAT to ILP. Let C = C1 & ... & Cn be some 3-compound clause. Let a1, ¬a1, ..., am, ¬am be the literals appearing in these clauses; we treat each of these 2m literals as a coordinate. Let P be the polytope defined by the inequalities

aj ∈ [−1, 2] and ¬aj ∈ [−1, 2].

aj + ¬aj ∈ [0, 2].

αi + βi + γi ≥ 0. Here αi, βi, γi are the literals of Ci.

The number of inequalities defining P is O(n), and the diameter of P is O(√n). Hence, the size of P is polynomial in the size of the clause.
If p is a lattice point in the interior of P, then every coordinate of p is either 0 or 1. Hence, we can use p to assign truth values to the variables. By construction, p assigns true to ai if and only if p assigns false to ¬ai. Also, p cannot assign false to all of αi, βi, γi. Hence, the truth-assignment specified by p satisfies the clause. Conversely, a satisfying truth-assignment for C gives a 0/1 point at which all of the inequalities hold strictly, and this point is a lattice point in the interior of P.
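Here is a sketch that writes out the inequalities defining P, with coordinates 1, ..., m for the variables and m+1, ..., 2m for their negations, and each constraint encoded as a coefficient dictionary together with an upper bound; all of these encoding choices are mine.

```python
def ilp_from_3sat(clauses, num_vars):
    """Build the inequalities defining the polytope P of Theorem 9.2.

    Coordinates 1..num_vars stand for the variables a_j and coordinates
    num_vars+1..2*num_vars stand for their negations.  Each constraint is
    returned as (coeffs, bound), meaning sum(coeffs[i] * x_i) <= bound.
    clauses use the integer literals from the earlier sketches.
    """
    def neg(j):
        return num_vars + j            # coordinate of the negation of a_j

    constraints = []
    for j in range(1, num_vars + 1):
        for coord in (j, neg(j)):
            constraints.append(({coord: 1}, 2))       # x <= 2
            constraints.append(({coord: -1}, 1))      # -x <= 1, i.e. x >= -1
        constraints.append(({j: 1, neg(j): 1}, 2))    # a_j + (not a_j) <= 2
        constraints.append(({j: -1, neg(j): -1}, 0))  # a_j + (not a_j) >= 0
    for clause in clauses:
        coeffs = {}
        for lit in clause:
            coord = lit if lit > 0 else neg(-lit)
            coeffs[coord] = coeffs.get(coord, 0) - 1
        constraints.append((coeffs, 0))               # alpha + beta + gamma >= 0
    return constraints
```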
