
CMPT 407 - Complexity Theory

Lecture 6: Search-to-Decision, Levin’s Universal Search Algorithm

Valentine Kabanets
May 25, 2017

1 “Search-to-Decision” Reductions
Suppose that P = NP. That would mean that all NP languages can be decided in deterministic polytime. For example, given a graph, we could decide in deterministic polytime whether that graph is 3-colorable. But could we find an actual 3-coloring? It turns out that yes, we can.
In general, we can define an NP search problem: Given a polytime relation R, a constant c, and a string x, find a string y, |y| ≤ |x|^c, such that R(x, y) is true, if such a y exists. As the following theorem shows, if P = NP, then every NP search problem can also be solved in deterministic polytime.
Theorem 1. If NP = P, then there is a deterministic polytime algorithm that, given a formula φ(x_1, . . . , x_n), finds a satisfying assignment to φ, if such an assignment exists.
Proof. We use a kind of binary search to look for a satisfying assignment to φ. First, we check if φ(x_1, . . . , x_n) ∈ SAT. Since we assumed that P = NP, this can be done in deterministic polytime. If φ is unsatisfiable, we report that no satisfying assignment exists and halt. Otherwise, we check if φ(0, x_2, . . . , x_n) ∈ SAT, i.e., if φ with x_1 set to False is still satisfiable. If it is, then we set a_1 to be 0; otherwise, we set a_1 = 1. In the next step, we check if φ(a_1, 0, x_3, . . . , x_n) ∈ SAT. If it is, we set a_2 = 0; otherwise, we set a_2 = 1. We continue this way for n steps. At every step, the value we choose keeps the restricted formula satisfiable, so by the end we have a complete assignment a_1, . . . , a_n to the variables x_1, . . . , x_n, and by construction, this assignment must be satisfying.
The amount of time our algorithm takes is polynomial in the size of φ: we have n steps,
where at each step we must answer a SAT question. Since, by our assumption, P = NP, each
step takes polytime.
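To make the reduction concrete, here is a minimal Python sketch of the procedure from the proof above. The decision procedure sat_decide is a hypothetical black box standing in for the assumed polytime SAT algorithm, and formulas are represented in a simple DIMACS-style list-of-clauses form chosen only for this illustration.

def find_satisfying_assignment(cnf, num_vars, sat_decide):
    # cnf: list of clauses, each a list of nonzero ints (+v means x_v, -v means NOT x_v).
    # sat_decide: hypothetical black-box decider, True iff its CNF argument is satisfiable.
    def restrict(formula, var, value):
        # Substitute x_var = value: drop satisfied clauses, delete falsified literals.
        true_lit = var if value else -var
        return [[lit for lit in clause if lit != -true_lit]
                for clause in formula if true_lit not in clause]

    if not sat_decide(cnf):
        return None                      # phi is unsatisfiable: no assignment exists
    assignment, formula = {}, cnf
    for var in range(1, num_vars + 1):
        trial = restrict(formula, var, False)
        if sat_decide(trial):            # phi with x_var = 0 is still satisfiable
            assignment[var], formula = False, trial
        else:                            # otherwise some satisfying assignment has x_var = 1
            assignment[var], formula = True, restrict(formula, var, True)
    return assignment

Each iteration makes one call to sat_decide, so the whole procedure makes at most n + 1 oracle calls, matching the n-step analysis above.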
Theorem 1 shows the true importance of the question whether NP = P. If NP = P, we could efficiently generate a correct solution for any problem that has an efficient recognition algorithm for correct solutions. For instance, if P = NP, then we could efficiently find the login password of any user of a network, since checking whether a password matches a login name can be done efficiently. Thus, if P = NP, essentially any secret could be found out efficiently.

Remark 2. Consider the language

Composite = {N | some prime p < N divides N }.

This language is clearly in NP. Moreover, there is a known deterministic polytime algorithm for this problem (as Primality Testing is in P).
The corresponding search version is basically the Factoring problem: Given N, find a nontrivial prime factor of N.
If there were a polytime “search-to-decision” reduction for this problem, we would get a polytime algorithm for factoring integers! However, no such algorithm is currently known (and is conjectured not to exist).

2 Levin’s Universal Search


Suppose we are told that SAT ∈ P, yet we are not given an actual polytime algorithm for
SAT. Can we still solve SAT in polytime, without knowing the actual algorithm, but just
knowing that it exists? Surprisingly, the answer is Yes! We can use a certain universal SAT
algorithm, based on Levin’s Universal Search algorithm (for inverting one-way functions).
First, suppose that SAT ∈ TIME(n^c). Also suppose that we know c. (If we don’t know c, the actual polytime bound on solving SAT, we can still solve SAT ourselves, but it will take slightly more than polynomial time.)

Theorem 3. Suppose SAT ∈ TIME(n^c) for a known constant c > 0, but via an unknown algorithm (solving SAT in time O(n^c)). Then one can exhibit an explicit polytime algorithm that solves SAT in time O(n^{c+2} · log^2 n + t_0(n) · n), where t_0(n) ∈ poly(n) is the time it takes to check whether a given assignment satisfies a given SAT instance of size n.

Proof. By the “Search-to-Decision” reduction for SAT, we know that SAT-Search can be solved in time O(n^{c+1}). That is, there is a Turing machine, running in time O(n^{c+1}), that on a given SAT instance φ of size n, either finds a satisfying assignment for φ, or decides that φ is unsatisfiable. (Note that we only know that such a TM exists. We don’t know the actual TM, as it relies on the decision algorithm for SAT that is not given to us.)
Here’s an algorithm to solve SAT:

“On input φ of size n,


1. for i = 1 to n
2. simulate TM M_i on φ for O(n^{c+1} · log^2 n) steps;
3. if M_i produces a satisfying assignment for φ, then Accept (and halt)
4. endfor
5. Reject”

For the analysis, first observe that the described algorithm never accepts an unsatisfiable formula. On the other hand, if given a satisfiable formula φ of size n, the algorithm will eventually simulate the TM for SAT-Search (which we know exists) that runs in time n^{c+1}.

Let M_d be the TM solving SAT-Search in time n^{c+1}. Simulating M_d for its n^{c+1} steps on a universal TM (used by our algorithm) requires time d · n^{c+1} · (c + 1) log n. When i = d, this TM M_d is simulated on φ for O(n^{c+1} · log^2 n) steps, which exceeds d · n^{c+1} · (c + 1) log n for n large enough (we also need n ≥ d, so that M_d is among the simulated machines). Thus, for large enough n, our algorithm will discover a satisfying assignment for φ and correctly accept. This shows that the described algorithm is correct.

For the running time analysis, we simulate n TMs for O(n^{c+1} · log^2 n) time each, so the total time our algorithm takes is O(n^{c+2} · log^2 n), plus O(n · t_0(n)) for checking whether any of the n candidate strings (produced by the TMs M_i) is a satisfying assignment.
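To illustrate the structure of this universal algorithm, here is a minimal Python sketch under simplifying assumptions: instead of enumerating all Turing machines and counting machine steps exactly, it takes a list candidate_searchers of generator functions (stand-ins for the machines M_i) and charges one “step” per candidate assignment produced; satisfies plays the role of the polytime verifier whose cost is the t_0(n) term.

from itertools import product

def satisfies(cnf, assignment):
    # Polytime verifier: does the assignment (dict var -> bool) satisfy every clause?
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

def universal_sat_search(cnf, num_vars, candidate_searchers, budget):
    # Run each candidate searcher for at most `budget` steps (the analogue of the
    # O(n^{c+1} * log^2 n) simulation bound) and accept as soon as some produced
    # assignment verifies.  An unsatisfiable formula is never accepted, since every
    # candidate assignment is checked before accepting.
    for search in candidate_searchers:
        gen = search(cnf, num_vars)
        for _ in range(budget):
            try:
                candidate = next(gen)
            except StopIteration:
                break
            if satisfies(cnf, candidate):
                return candidate         # "Accept (and halt)"
    return None                          # "Reject"

def brute_force(cnf, num_vars):
    # One (exponential-time) candidate searcher: enumerate all assignments.
    for bits in product([False, True], repeat=num_vars):
        yield {v + 1: bits[v] for v in range(num_vars)}

For example, universal_sat_search([[1, 2], [-1]], 2, [brute_force], budget=8) returns the assignment {1: False, 2: True}. In the actual theorem, the list of candidates is the standard enumeration of all Turing machines, so the constants hidden in the running time depend on the index d of a correct SAT-Search machine, as Remark 4 below points out.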
Remark 4. The described universal algorithm for SAT is good theoretically: it runs only slightly slower than the assumed fastest algorithm for SAT. However, this algorithm is not very practical, as it starts to work only for very large input sizes n ≫ d, where d is the index of a correct SAT algorithm. Presumably, a fast algorithm for SAT (if it exists at all!) would be quite complex and long, and so its index d may be a huge constant (exponential in the description size of the program for M_d)!

3 Motivation: Lower bounds for SAT


Even though it is widely believed that NP ≠ P, and so that SAT is not in P, we are so far unable to prove that SAT requires time n^2, or even that SAT requires time n^{1.1}.
What if we impose an additional requirement of small space? For proper functions
t, s : N → N, define the class TISP(t, s) (for simultaneous Time and Space) to contain
exactly those languages L such that some TM M decides L in time at most t and space at
most s.
With the extra restriction, we are able to prove the following time-space tradeoff for SAT:
Theorem 5 (Fortnow). SAT ∉ TISP(n^{1.1}, n^{0.1}).
That is, if we restrict our attention to algorithms using space at most n^{0.1}, we get that any such algorithm solving SAT would need to use strictly more time than n^{1.1}. (Equivalently, if we consider algorithms running in time at most n^{1.1}, we get that any such algorithm solving SAT would have to use more than n^{0.1} space.)
The proof of this result requires the concept of alternating Turing machines, which generalize NP-machines and coNP-machines by allowing alternating “existential” and “universal” guesses. We explain this next.

4 Polynomial-Time Hierarchy
Recall that a language L ∈ NP can be described by the formula: x ∈ L iff ∃ (short y) R(x, y),
where y is of length polynomial in the length of x, and R is a polytime predicate.
Similarly, a language L ∈ coNP can be described by the formula: x ∈ L iff ∀ (short y)
R(x, y), where y is of length polynomial in the length of x, and R is a polytime predicate.

What happens if we allow some k alternating quantifiers over short strings? We get the
kth level of the polynomial-time hierarchy!
We call a k-ary relation R polynomially balanced if, for every tuple (a_1, . . . , a_k) ∈ R, the lengths of all a_i’s are polynomially related to each other.

Definition 6. For any i ≥ 1, a language L ∈ Σ^p_i iff there is a polynomially balanced (i + 1)-ary relation R such that

L = {x | ∃y_1 ∀y_2 ∃y_3 . . . Q_i y_i R(x, y_1, . . . , y_i)}.

Here, Q_i is ∃ if i is odd, and ∀ if i is even.

For example, Σ^p_1 = NP.

Definition 7. For any i ≥ 1, a language L ∈ Π^p_i iff there is a polynomially balanced (i + 1)-ary relation R such that

L = {x | ∀y_1 ∃y_2 ∀y_3 . . . Q_i y_i R(x, y_1, . . . , y_i)}.

For example, Π^p_1 = coNP.


Note that, in general, for every i, Π^p_i = coΣ^p_i.

Definition 8. PH = ∪_{i≥0} Σ^p_i.

Theorem 9. PH ⊆ PSPACE.

Proof. Recall that NP ⊆ PSPACE since we can just enumerate (re-using space) over all
candidate witnesses, and check if any one of them is valid. The case of PH ⊆ PSPACE is a
generalization of this idea. (Exercise!)
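A small Python sketch of this enumeration idea (under the hypothetical convention that the language is given by a polytime predicate R together with a witness-length bound): the alternating quantifiers are evaluated recursively, re-using the same space for each candidate witness, so the space used is polynomial even though the running time is exponential.

from itertools import product

def sigma_k_decide(x, k, witness_len, R):
    # Decide membership in the Sigma^p_k language defined by the (k+1)-ary
    # predicate R, by brute-force evaluation of the k alternating quantifiers.
    def eval_level(i, witnesses_so_far):
        if i > k:
            return R(x, *witnesses_so_far)
        candidates = (''.join(bits) for bits in product('01', repeat=witness_len))
        if i % 2 == 1:   # odd level: existential quantifier
            return any(eval_level(i + 1, witnesses_so_far + [y]) for y in candidates)
        else:            # even level: universal quantifier
            return all(eval_level(i + 1, witnesses_so_far + [y]) for y in candidates)
    return eval_level(1, [])

At any moment the recursion stores only k candidate witnesses (one per quantifier level), which is exactly the space re-use in the argument above; turning this into a full proof of PH ⊆ PSPACE is the exercise.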

4.1 Examples of problems in PH


Unique-SAT = {φ | φ is a formula with exactly one satisfying assignment}

Theorem 10. Unique-SAT is in Σ^p_2.

Proof. Note that φ ∈ Unique-SAT iff there is y such that for all z, z ≠ y, we have φ(y) is True and φ(z) is False.
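Written out in the notation of Definition 6 (with y and z ranging over assignments, whose length is polynomial in |φ|), this says:

Unique-SAT = {φ | ∃y ∀z [φ(y) ∧ (z = y ∨ ¬φ(z))]},

and the relation in square brackets is polytime and polynomially balanced, so Unique-SAT ∈ Σ^p_2.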
Min-Circuit = {C | C is a Boolean circuit s.t. no smaller equivalent circuit exists}
Here, the size of a Boolean circuit is the number of logical operations (ANDs, ORs, and
NOTs), or gates, used in the circuit.

Theorem 11. Min-Circuit is in Π^p_2.

Proof. Note that C is in Min-Circuit iff for every smaller circuit C′ there is an input x such that C(x) ≠ C′(x).
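Similarly, in the notation of Definition 7 (writing size(·) for the number of gates, as defined above), this reads:

Min-Circuit = {C | ∀C′ ∃x [size(C′) ≥ size(C) ∨ C(x) ≠ C′(x)]},

which is a Π^p_2 condition since the bracketed relation is polytime.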

4.2 Alternative definition of PH
Definition 12. An oracle TM is a TM M with a special tape, called the oracle tape, and special states q_?, q_yes, q_no. When run with some oracle O (where O is just some language), M can query O on some string x by writing x onto its oracle tape and then entering the state q_?. In the next step, TM M (miraculously) finds itself in the state q_yes if x ∈ O, or the state q_no if x ∉ O.

This definition of an oracle TM captures the notion of “having access to an efficient algorithm deciding O”.
For complexity classes C_1 and C_2, we say that a language L ∈ C_1^{C_2} if there is an oracle TM M from class C_1 that, given oracle access to some language O ∈ C_2, decides L.
For example, Unique-SAT ∈ NP^NP: Given a formula φ, nondeterministically guess an assignment a. Check that φ(a) is True. If not, then Reject; otherwise, construct a new formula φ′(x_1, . . . , x_n) ≡ “φ(x_1, . . . , x_n) ∧ [x_1 . . . x_n ≠ a_1 . . . a_n]”. Ask the SAT oracle whether φ′ is satisfiable. If it is, then Reject; otherwise, Accept.
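As a concrete illustration, here is one nondeterministic branch of this NP^NP algorithm, sketched in Python; the names unique_sat_branch and sat_oracle are illustrative only. The argument guess plays the role of the guessed assignment a, and the SAT oracle is a black box that reports whether its CNF argument (in the same list-of-clauses encoding as before) is satisfiable.

def unique_sat_branch(cnf, num_vars, guess, sat_oracle):
    # Deterministic work of one branch: check that phi(guess) is True.
    if not all(any(guess[abs(lit)] == (lit > 0) for lit in clause) for clause in cnf):
        return False                               # Reject on this branch
    # Build phi' = phi AND [x_1 ... x_n != a_1 ... a_n]: at least one variable
    # must take a value different from the guessed assignment a.
    differing_clause = [(-v if guess[v] else v) for v in range(1, num_vars + 1)]
    phi_prime = cnf + [differing_clause]
    # One oracle query: accept iff phi' is unsatisfiable,
    # i.e., the guessed assignment is the only satisfying one.
    return not sat_oracle(phi_prime)

The whole NP^NP machine accepts φ iff some guess makes unique_sat_branch return True, which happens exactly when φ has a unique satisfying assignment.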
Alternative definition of PH. Define Σ^p_0 = Π^p_0 = P. For all i ≥ 0, define Σ^p_{i+1} = NP^{Σ^p_i} and Π^p_{i+1} = coNP^{Σ^p_i}. Finally, set PH = ∪_{i≥0} Σ^p_i.

Theorem 13. The original definition and the alternative definition of PH are equivalent.

Proof. The base case of i = 0 is immediate: in both definitions, the 0th level is just the class
P.
Just for the sake of this proof, let us denote by Σ^1_i and by Σ^2_i the ith level of the polytime hierarchy according to definitions 1 and 2, respectively. (The first definition is in terms of logical formulas; the second definition is in terms of oracle TMs.)
We need to show that Σ^1_i = Σ^2_i for all i. The case of i = 0 is already argued. Let us assume the equivalence of the two definitions for i, and prove it for i + 1.
Let us start by proving that Σ^1_{i+1} ⊆ Σ^2_{i+1}. By definition, L ∈ Σ^1_{i+1} iff there is a poly-balanced relation R such that x ∈ L ⇔ ∃y_1 ∀y_2 . . . R(x, y_1, y_2, . . . , y_{i+1}). Consider the language L′ = {(x, y) | ∀y_2 . . . R(x, y, y_2, . . . , y_{i+1})}. It is easy to see that L′ ∈ Π^1_i, and hence, by the induction hypothesis, L′ ∈ Π^2_i. Now, to test if x ∈ L we can do the following: Nondeterministically guess a y, then check if (x, y) ∈ L′ by querying the Π^2_i oracle. This algorithm shows that L ∈ Σ^2_{i+1}.
Let us now prove the other direction, i.e., that Σ^2_{i+1} ⊆ Σ^1_{i+1}. Consider an arbitrary language L ∈ Σ^2_{i+1}. By definition, there is an NP^{Σ^2_i} TM M that decides L. Also, we have that x ∈ L iff there is an accepting computation of M on x.
For any input x, consider a run of the TM M on x. During that computation, the TM M may ask (up to a polynomial number of) oracle queries to the Σ^2_i oracle. Some of these oracle queries have the answer Yes, and the others No. Note that the Yes answers can be verified in Σ^2_i, which is equal to Σ^1_i by the inductive hypothesis. The No answers can be verified in Π^2_i, which is equal to Π^1_i by the inductive hypothesis.

Thus, to test if x ∈ L, we can guess (using the ∃ quantifier) an accepting computation path of M on x together with all answers to the oracle queries, and check the correctness of our path, including all the answers to the oracle queries, in (Σ^1_i ∪ Π^1_i). Put together, this gives us a way to check whether x ∈ L by a Σ^1_{i+1} formula. Hence, we get L ∈ Σ^1_{i+1}.
Finally, since Π_i = coΣ_i for each of the two definitions, we immediately obtain the equality Π^1_{i+1} = Π^2_{i+1}.
