Computational Complexity, 2012–13 1

Solutions for Exercise Sheet 1


Rather than simply stating the solutions, I will try to describe intuitions and
problem-solving strategies that are natural for each problem. Hopefully, this
will be helpful when solving future exercises, and in the final exam as well. Some
solutions also have a Notes section explaining the motivation behind the problem,
and/or a Related Problems section to help you test your understanding of the
solution method.

1. Question: An infinite-state Turing machine is a Turing machine defined in
the usual manner, except that the state set Q is infinite. The input and tape
alphabets, though, remain finite. Show that for any language L ⊆ {0, 1}∗,
there is an infinite-state Turing machine deciding L in linear time.
Solution: Perhaps the most natural way to decide a language or compute
a function is to use a “lookup table”, which tells you the answer for each
possible input. This is not typically useful unless you’re dealing with finite
languages or functions, because Turing machines as they’re usually defined
have a finite description. Allowing the Turing machine to have an infinite
number of states opens up the possibility of using the simple lookup table
strategy again. We can create a state for each string, essentially recording
whether that string is a YES instance or not. Since there are a countably
infinite number of strings to consider, there will also be a countably infinite
number of states.
More formally, let L be any language of binary strings. We define a Turing
machine M = (Q, Σ, Γ, δ, qi, qf) as follows. Σ = {0, 1} and Γ = {0, 1, B} (we
won’t be using any tape apart from the read-only input tape). Q contains
a state qw for each string w ∈ {0, 1}∗ , as well as the initial state qi and
final state qf . The transition function is as follows. If the machine is in
state qi and the input symbol being read is 0, the machine goes to state
q0 , otherwise it goes to state q1 . In general, if the machine is in state qw
and reads 0, it goes to state qw0; if it reads 1, it goes to state qw1. If it
reads the blank symbol (which means the entire input has been read), it
goes to the accept state qf in case w ∈ L, otherwise the transition function
is undefined (which implies rejection). Clearly, M accepts those w which
are in L and no others. Also, it operates in linear time, since it reads each
input symbol exactly once and then makes a decision.
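As an aside, the lookup-table machine is easy to model in code. The following is a small Python sketch (not part of the original solution): the state qw is represented simply by the string w read so far, and a membership predicate for L plays the role of the transition into qf on reading a blank.

```python
# Sketch (not part of the original solution): the infinite-state machine,
# with the state q_w modelled by the string w read so far. Membership of w
# in L plays the role of the transition into q_f on reading a blank.
def make_machine(L):
    """L is a membership predicate for an arbitrary language of bit strings."""
    def run(x):
        w = ""                   # start state q_i corresponds to w = ""
        for symbol in x:         # one step per input symbol: linear time
            w = w + symbol       # transition q_w -> q_{w symbol}
        return L(w)              # blank reached: accept iff w is in L
    return run

# Works for any L, e.g. strings with an odd number of 1s:
M = make_machine(lambda w: w.count("1") % 2 == 1)
print(M("0111"))  # True
```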
Notes: This question is meant to illustrate how the concept of computation
becomes trivial when the computational model is not finitistic. It is a
remarkable fact - the Church-Turing thesis - that all strong enough models
of computation that are finitistic are equivalent to each other in terms
of deciding power, and in particular equivalent to the multi-tape Turing
machine.
2. Question: Let L = {xy | |x| = |y| and Σ_{i=1}^{|x|} xi yi = 1 mod 2}. Prove:

(a) L ∈ DTIME(n)
(b) L ∈ DSPACE(log(n))

Note: You do not need to specify the Turing machines accepting L in full
detail, but you need to give a clear high-level description and argue that
the resource bounds are as claimed.
Solution: This problem is similar to the examples discussed in class of the
Parity and Duplication languages, for which we analyzed the time and space
complexity.
We solve the first part first. We are asked to construct a deterministic
Turing machine M which decides L in linear time. M should accept iff its
input is of the form xy, where |x| = |y|, and the inner product of x and y
is odd.
For the first condition to hold, the input length should be even. So we count
the input length first - this also allows us to split up the input into x and y,
which facilitates the computation of the inner product. The counting can be
done by implementing a counter on a read/write tape, and incrementing the
counter for each input bit read. This might seem to take Ω(n log(n)) time,
since the counter is of size O(log(n)) and every increment can take time up
to log(n). However, the amortized complexity of repeated incrementation is
linear - an increment takes time i only for about a 1/2^i fraction of the
increments. For instance, when the counter is even, incrementing
it just requires changing one bit.
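To see the amortized bound concretely, here is a Python sketch (illustrative only) of the counter increment, with the counter stored least-significant-bit first:

```python
# Sketch (illustrative only): a binary counter on a "work tape", stored
# least-significant-bit first. An increment that touches i bits occurs only
# about a 1/2^i fraction of the time, so n increments cost O(n) in total.
def increment(counter):
    i = 0
    while i < len(counter) and counter[i] == 1:
        counter[i] = 0           # the carry turns trailing 1s into 0s
        i += 1
    if i == len(counter):
        counter.append(1)        # the counter grows by one bit
    else:
        counter[i] = 1
    return i + 1                 # number of bits touched by this increment

counter, total_work = [], 0
n = 1024
for _ in range(n):
    total_work += increment(counter)
print(total_work)                # 2047, i.e. under 2 bits of work per increment
```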
Once we’ve computed the count, we first check if it’s even; if not, we reject.
If the input length is even, we compute n which is half the input length,
simply by removing the least significant bit from the counter. Once we
know n, we can determine the boundary between x and y on the input
tape, simply by incrementing a new counter each time an input bit is read,
and stopping when the counter reaches n.
Since the computation of the inner product involves multiplying xi and
yi for various bit positions i, it’s convenient to have x and y on different
tapes, which can then be read from left to right while the computation is
performed. So, once we know where y begins on the input tape, we copy
it bit by bit onto a new read/write tape. This only takes linear time. We
then initialize the input tape head to the first bit of x and the tape head
of the new read/write tape to the first bit on that tape, which again takes
linear time. Now we simply scan the tapes from left to right, recording
in our state whether the inner product so far is odd or even. This can be
updated in constant time per input bit read. When we come to the end of
x and hence of y, we either accept or reject depending on whether the inner
product is odd or even.
It is easy to see that this procedure correctly decides L, and the time taken
by the machine M we’ve defined is linear in n.
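For concreteness, the high-level procedure can be sketched in Python (the tape mechanics are abstracted away; the slicing step stands in for copying y onto a second tape):

```python
# Sketch (not part of the original solution): the linear-time procedure,
# with Python slicing standing in for copying y onto a second tape.
def decide_L(w):
    if len(w) % 2 != 0:          # |x| = |y| forces the input length to be even
        return False
    n = len(w) // 2              # half the input length: drop the counter's last bit
    x, y = w[:n], w[n:]          # split the input; y is "copied to a new tape"
    parity = 0
    for xi, yi in zip(x, y):     # single left-to-right scan of both "tapes"
        parity ^= int(xi) & int(yi)   # update the inner product mod 2
    return parity == 1

print(decide_L("1101"))  # True: x = 11, y = 01, inner product = 1 mod 2
```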
For the second part, we need to construct an M′ deciding L which operates
in a more space-efficient way. Thus we can no longer afford to copy all
of y onto a new tape. So instead, we will just maintain pointers to the
current position in y and the current position in x on a read/write tape, so
that we can update the inner product modulo 2 in the same way as before.
Each such pointer can be represented in O(log(n)) bits, which will give us
a space-efficient computation.
The first part of the operation of M′ is exactly as before - we count the
input length, and check it’s even. This just involves maintaining a counter,
which only takes logarithmic space. Then we need to maintain an updated
partial inner product modulo 2. This essentially involves maintaining the
sum of wj wn+j for j going from 1 to i, where w is the input. This is because
wj = xj and wn+j = yj . To update this product mod 2, we access xj+1 using
a counter, store it in our state, then access yj+1 = wn+j+1 , again using a
counter, compute the product of these two bits, and record the updated
parity of the inner product in our current state. We then increment j.
When j reaches n, we stop and either accept or reject depending on whether
the current parity is 1 or 0.
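The logspace procedure can likewise be sketched in Python; note that apart from the read-only input, it keeps only counters and a single parity bit, i.e., O(log n) bits of state:

```python
# Sketch (not part of the original solution): the logspace procedure. Apart
# from the read-only input w, it keeps only the counters n and j and one
# parity bit - O(log n) bits in total.
def decide_L_logspace(w):
    if len(w) % 2 != 0:
        return False
    n = len(w) // 2
    parity = 0
    for j in range(n):           # j is an O(log n)-bit pointer into the input
        parity ^= int(w[j]) & int(w[n + j])   # x_{j+1} = w_{j+1}, y_{j+1} = w_{n+j+1}
    return parity == 1

print(decide_L_logspace("1101"))  # True, same answer as the linear-time machine
```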
Notes: We construct two different machines, the first time-efficient and
the second space-efficient. In fact, there is no machine for solving this problem
which is both linear in time and logarithmic in space. This can be proved
formally - you should try it if you welcome a challenge! The point of this
exercise is that there is a tradeoff between time and space complexity for
some natural problems.
3. Question: Show that P ≠ NSPACE(n).
HINT: Consider the closures of these classes under polynomial-time m-reductions.
Solution: Here we are asked to separate two complexity classes. Whenever
we are asked for a complexity class separation, we should try to use diago-
nalization in some way, since this is the only technique we’ve discussed so
far for unconditionally separating classes. Sometimes diagonalization can
be used directly, but here that’s not an obvious possibility, since one of the
classes is a time class and the other a space class. Let’s try to use diago-
nalization indirectly instead, by assuming the opposite of what we’re trying
to prove, and then deriving a contradiction to a hierarchy theorem.
So now assume P = NSPACE(n). The hint suggests that it might be useful
to consider the closure of these classes under polynomial-time m-reductions.
The closure of P under polynomial-time reductions is clearly P itself. How
about the closure of NSPACE(n)? Well, we saw in class that there are
NP-complete languages in NTIME(n). Basically the same proof (using a
translation technique) gives us that for any language L in NPSPACE, there is
a language L′ in NSPACE(n), such that L m-reduces to L′. Thus NPSPACE
is contained in the m-closure of NSPACE(n).
We haven’t used our assumed equality P = NSPACE(n) thus far, and we’ll
do so now. Since NPSPACE is contained in the m-closure of NSPACE(n),
by the equality above we get that it’s contained in the m-closure of P, and
hence in P, and hence again in NSPACE(n). But this is a contradiction
to the hierarchy theorem for non-deterministic space (or to the hierarchy
theorem for deterministic space, if we use Savitch’s theorem in addition).
Voila!
Notes: The above proof shows that P and NSPACE(n) are not equal, but
interestingly it doesn't show explicitly that one of these classes is not con-
tained in the other. We only know that at least one of P ⊈ NSPACE(n) and
NSPACE(n) ⊈ P holds, but we don't know which one(s)!
Related Problems: Try to show that NP ≠ DSPACE(n^2) using a similar
idea. Why is it much easier than this to show that NP ≠ DTIME(n^2)?

4. Question: Let L = {< M, x, t > | M is a deterministic TM accepting x within t steps}.
Note that here t is represented in binary. Prove that L ∉ P.
Solution: Here, again, is a problem where you’re asked to show uncondi-
tionally that some language is not in a complexity class. As I’ve said a few
times in class, proving upper bounds is often quite straightforward, since
we just need to construct a machine deciding our language with the stated
resources. Proving a lower bound is much harder, since we need to show
that no machine operating within certain complexity constraints decides
our language correctly.
Essentially the only results we know showing strong unconditional lower
bounds on general machines are hierarchy theorems. Thus, we should some-
how try to bring in a hierarchy theorem here. But how?
Let’s first try and see what kind of upper bound we can prove on L. De-
ciding L on an input < M, x, t > involves simulating M on x for t steps.
Since t is specified in binary, this can be done in exponential time in the
input length (which is | < M, x, t > |).
There doesn’t seem to be a way to solve L quicker, but “there doesn’t seem”
doesn’t constitute a proof. Is there some stronger property that holds for L
which implies we cannot solve it in polynomial time? A natural property to
look for is completeness - most natural problems turn out to be complete
for some complexity class.
Is L complete for EXP under polynomial-time m-reductions? Indeed it is,
and the proof is analogous to the proof that our first construction of an NP-
complete language was correct. Let L′ be any language in EXP, accepted
by a Turing machine M′ running in t(n) = 2^(n^k) steps, for some constant
k. Then the reduction which maps x to < M′, x, t(|x|) > is a polynomial-time
reduction from L′ to L. Note that t(|x|) can be represented in binary with
about n^k bits, which ensures that the output of the reduction is of polynomial size.
The EXP-completeness of L enables us to finish the proof. If L were in
P, then by completeness of L, so would be all of EXP, since P is closed under
polynomial-time m-reductions. This would imply EXP = P, which is a contradiction
to the time hierarchy theorem.
Notes: This proof technique is the same as that used in class to show
TQBF not in L. The additional work to be done here was to guess that L
is EXP-complete, and to prove it.
Related Problems: Try to show that the language L is not in NSPACE(log^k(n)),
for any constant k.

5. Question: Prove the following:

(a) There is a language in NP which is not complete under polynomial-time
m-reductions.
(b) If NP ≠ P, there is a language in NP which is not complete under
polynomial-time Turing reductions.

Solution: I am embarrassed to say that this is something of a trick question.
But not too embarrassed, for this is an exercise designed to teach you to be
careful about definitions and details.
We need to show unconditionally that there is a language in NP which is
not complete under polynomial-time reductions. But if NP were equal to
P, wouldn’t every language in NP be complete, since every language in P
would be? So wouldn’t solving this question imply NP 6= P?
Not quite (and thus the million dollars remain unclaimed). It’s true that
every non-trivial language in P is complete under polynomial-time reduc-
tions, but the trivial languages {0, 1}∗ and EMPTY are not. This is for the
trivial reason that if there were an m-reduction from a language L to the
“full” language {0, 1}∗ , then L would also contain all strings. Similarly, if
there were an m-reduction from L to EMPTY, L would be empty.
Since there are languages in NP which are neither full nor empty, we can
deduce that the full and empty languages are not NP-complete under m-
reductions.
For Turing reductions, this argument doesn’t quite work, since L Turing-
reducible to the full language doesn’t imply L is full. But it does imply L ∈
P, since the full language is in P and P is closed under Turing reductions.
Thus if every language in NP were Turing-reducible to the full language,
NP = P. This statement is simply the contrapositive of what we’re trying
to prove.
6. Question: A polynomial-time computable function f : {0, 1}∗ → {0, 1}∗ is
said to be honest if there is a polynomial p such that |x| ≤ p(|f(x)|) for
each x. The function f is said to be polynomial-time invertible if there is a
polynomial-time transducer M such that f (M (y)) = y for any string y in
the range of f . Show that all polynomial-time computable honest functions
are polynomial-time invertible if and only if NP = P.
Solution: We need to show two things. First, that if NP = P, then all
polynomial-time honest functions are polynomial-time invertible. Second,
that if all polynomial-time honest functions are polynomial-time invertible,
then NP = P.
Since we’re dealing with functions here rather than decision problems, it’s
natural to try to apply results about NP = P implying efficient solutions
to NP search problems. Notice that inverting an honest poly-time function
is intuitively an NP search problem. Once you guess the solution, i.e., the
inverse z, then verifying the solution is easy - just compute f (z) and check
if it’s equal to the input y.
We follow the same approach as we did in class for showing NP = P implies
we can find satisfying assignments to satisfiable formulae efficiently. We’re
going to find an inverse bit by bit. Since we’re assuming NP = P, we can
use an NP oracle in our search procedure. We use the following NP language
L as our oracle: a string < w, y > is in L if and only if there is a string z
with prefix w such that f (z) = y. L is in NP since we can just guess z with
prefix w and verify that it maps to y in polynomial time.
Now, given input y, the transducer M first asks the oracle the questions
< 0, y > and < 1, y >. If it gets a no answer to both these questions, then
it outputs an arbitrary string, since in this case y is not in the range of f ,
and we’re only interested in y which is in the range of f . If it gets a yes
answer to at least one question, assume without loss of generality that the
yes answer is to < 0, y >. It then asks the oracle the questions < 00, y >
and < 01, y >. In general, if it gets a yes answer to the question < w, y >,
it asks the oracle the questions < w0, y > and < w1, y > and sets w to
w0 if the first question is answered yes, and to w1 if the second question is
answered yes. If neither question is answered yes, it outputs w.
The intuition is that w is a prefix of an inverse to y, and we would like to
extend this prefix incrementally and recover the entire inverse. As long as
w is a proper prefix of an inverse, either the question < w0, y > or the
question < w1, y > will receive a yes answer, since the next bit of the
inverse is either 0 or 1. When w is no longer a proper prefix, then neither
question might return yes, but at this point we know that w itself is an
inverse.
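Here is a Python sketch of the prefix-search procedure (illustrative only: the NP oracle is simulated by brute force over a small length bound, which of course is not polynomial time):

```python
# Sketch (not part of the original solution): the bit-by-bit inversion
# procedure. The NP oracle "is there a z with prefix w such that f(z) = y?"
# is simulated here by brute force over a small length bound, purely to
# illustrate the search - actually using this argument needs NP = P.
from itertools import product

def invert(f, y, max_len=12):
    def oracle(w):
        # stands in for the NP language L = {<w, y> : some z with prefix w has f(z) = y}
        return any(f(w + "".join(s)) == y
                   for l in range(max_len - len(w) + 1)
                   for s in product("01", repeat=l))
    if not oracle("0") and not oracle("1"):
        return "0"               # y not in the range of f: any output will do
    w = "0" if oracle("0") else "1"
    while oracle(w + "0") or oracle(w + "1"):
        w += "0" if oracle(w + "0") else "1"
    return w                     # no extension is a prefix of an inverse, so f(w) = y

# Example: inverting string reversal recovers a preimage.
print(invert(lambda z: z[::-1], "011"))  # 110
```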
How long does this search take? Honesty guarantees that any inverse z
satisfies |z| ≤ p(|y|), so the procedure halts after only polynomially many
oracle queries. The transducer M uses an oracle in NP, but since by assumption
NP = P, there is an equivalent polynomial-time transducer without an oracle.
For the other direction, it’s enough to show that if all honest poly-time
functions are invertible, then SAT is in polynomial time. Let’s try to define
an honest poly-time function such that inverting it will enable us to solve
SAT. A natural candidate is the function f that takes the pair < φ, w >
as input, where φ is a formula and w is a candidate assignment, and out-
puts φ0 if w is a satisfying assignment, and φ1 otherwise. This function
is polynomial-time computable, since computing it just involves checking
whether a given assignment is a satisfying assignment. It is also honest,
since the length of an assignment is at most the length of the formula,
hence | < φ, w > | is no more than polynomially bigger than |f(< φ, w >)| = |φ| + 1.
Assume that this function is polynomial-time invertible via a transducer
M. Then we can solve SAT on input φ as follows. We run M on φ0. The
output of M on this input is some pair < ψ, w >; we check whether w is a
satisfying assignment to φ. If yes, we answer yes on φ, otherwise we answer
no. This is a polynomial-time procedure, and using the definition of the
function f, it is easy to see that this procedure decides SAT correctly.
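As an illustration (with a toy clause-list representation of formulas, not part of the original), the function f can be made concrete:

```python
# Sketch (toy representation, not part of the original): formulas as lists
# of clauses, each clause a list of signed variable indices; the string w
# assigns bit w[i-1] to variable i.
def satisfies(phi, w):
    return all(any((lit > 0) == (w[abs(lit) - 1] == "1") for lit in clause)
               for clause in phi)

def f(phi, w):
    # the analogue of outputting the string phi0 or phi1
    return (tuple(map(tuple, phi)), 0 if satisfies(phi, w) else 1)

phi = [[1, -2], [2]]             # (x1 or not x2) and x2
print(f(phi, "11")[1])           # 0: the assignment x1 = x2 = 1 satisfies phi
print(f(phi, "01")[1])           # 1: the first clause fails
```

Inverting f on input φ0 then amounts exactly to finding a satisfying assignment for φ.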
Notes: The motivation for this problem is from cryptography where the
notion of a function that is easy to compute but hard to invert (known as
“one-way”) is very important. Intuitively, the easiness of computing a one-
way function corresponds to the easiness of encrypting a message, and the
hardness of inverting the function corresponds to the hardness of decrypting
the message for a malicious adversary.
Related Problems: Is the assumption that the functions are honest strictly
necessary? Why, or why not? Also, show that if we consider log-space
computable honest functions rather than polynomial-time computable ones,
the theorem still goes through (but the condition on invertibility is still that
it can be done in polynomial time).

Rahul Santhanam, March 2013
