Lecture 36

This lecture discusses space complexity in the context of Turing machines, focusing on deterministic and non-deterministic space complexity. It works through examples of space-efficient Turing machines, including machines that accept palindromes and machines that solve the reachability problem in graphs. Key concepts include the definitions of SPACE and NSPACE, the implications of space efficiency for algorithm design, and notable theorems related to space complexity, in particular Savitch's theorem.


Theory of Computation

Lecture #36

Sarmad Abbasi

Virtual University



Lecture 36: Overview

SPACE COMPLEXITY
Deterministic Space Complexity
Non-deterministic space complexity
Examples of space-efficient TMs
Reachability Problem
Savitch’s Theorem



Lecture 36: Space Complexity

We would now like to study space complexity: the amount of space, or memory, that is required to solve a computational problem. The following model of computation is the most convenient one for studying space complexity:



Space Complexity

Consider a Turing machine M which has:
1 An input tape that is read-only. The input is enclosed in # symbols.
2 A work tape that is initially all blank.
3 The restriction that it is not allowed to write a blank symbol on the work tape.
Here is a picture of such a machine:



Space Complexity

Here is a hypothetical picture of the machine after it halts:



Space Complexity

The space used by this machine on input x is the number of non-blank symbols on the work tape at the end of the computation. Note that in this case:
1 We do not count the input as space used by the machine.
2 The machine is allowed to read the input again and again.
3 However, since the input tape is read-only, the machine cannot use that space to store intermediate results.



Space Complexity

A good realistic machine to keep in mind is a computer with a CD-ROM. The input is provided on the CD-ROM and we are interested in figuring out how much memory would be required to perform a computation.



Space Complexity

Let M be a Turing machine that halts on all inputs. We say that M runs
in space s(n) if for all inputs of length n the machine uses at most s(n)
space.



Space Complexity

Let us look at a concrete example. Consider the language

L = {x ∈ {0, 1}* : x = x^R},

the set of all palindromes over {0, 1}, where x^R denotes the reverse of x. Let us look at two TMs that accept this language and see how much space they require.



Space Complexity

Let us describe a TM M1 that accepts L. We will give a high-level description (a code sketch follows the list).
1 The machine copies the input to the work tape.
2 It places one head on the leftmost character of the input and the other head on the rightmost character of the copy.
3 By moving one head left to right and the other right to left, it checks that the corresponding characters match.
4 If all of them match, the machine accepts; otherwise it rejects.
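As a rough illustration (not the formal machine itself), here is a minimal Python sketch of M1's strategy; the list work stands in for the work-tape copy and is exactly what costs Θ(n) space.

```python
def is_palindrome_copy(x: str) -> bool:
    """Sketch of M1: copy the input, then compare the two ends inwards."""
    work = list(x)                   # the work-tape copy: n cells
    left, right = 0, len(work) - 1
    while left < right:
        if work[left] != work[right]:
            return False
        left += 1
        right -= 1
    return True
```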



Space Complexity

It is not hard to see that the space required by M1 is linear. Thus M1 accepts L in space O(n). The question is: can we accept this language in less space?

Question
Is it possible to accept L in space o(n)?



Space Complexity

Now we want to save space. How about the following machine:
1 Read the first unmarked character and mark it.
2 Match it with the last character that is not yet marked, and mark that one too.
3 If all characters get marked and matched, then accept.
Here is a picture of how this machine works:



Space Complexity

In the middle of the computation:



Space Complexity

At the end of the computation.



Space Complexity

The problem is that we are not allowed to write on the input tape, since it is read-only. Thus we must copy the contents of the input tape to the work tape, which gives a machine that uses space at least n. Can we somehow do better? Here is another idea.



Space Complexity

Let us describe a TM M2 that accepts L.
1 The machine writes 1 on its work tape. We call this the counter.
2 It matches the first symbol with the last symbol.
3 In general, when it has i written on the counter, it matches the i-th symbol with the i-th last symbol.
4 In order to get to the i-th symbol, the machine can use another counter that starts at i and is decremented each time the head on the read-only input tape moves forward.
5 Similarly, to get to the i-th last symbol it can use yet another counter.
This machine only uses three counters. Each counter stores a number in binary, and no counter ever exceeds the value n.
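Here is a minimal Python sketch of the counter idea. The input x is only ever read, never copied; the working storage consists of a constant number of integer counters, each needing O(log n) bits.

```python
def is_palindrome_counters(x: str) -> bool:
    """Sketch of M2: compare x[i] with x[n-1-i] using only counters.

    Indexing into the read-only string stands in for M2 re-walking the
    input tape while decrementing an auxiliary counter."""
    n = len(x)              # one counter
    i = 0                   # the position counter
    while i < n // 2:
        if x[i] != x[n - 1 - i]:
            return False
        i += 1
    return True

print(is_palindrome_counters("0110"))   # True
```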



Space Complexity

To represent n in binary we need only log₂ n bits. Thus the machine will require only

O(log n)

space. This is a real improvement over the previous machine M1 in terms of space.



Space Complexity

A few points to remember that are made clear by this example:
1 If you are claiming that you have a sublinear-space TM, then make sure that you are not overwriting the input or using that space.
2 The machine M1 takes time O(n) and space O(n). On the other hand, M2 takes time O(n²) and space O(log n). So M1 is better in terms of time and M2 is better in terms of space. This is quite typical.
3 If you are asked to design a machine that takes less space, then no consideration need be given to time.



Space Complexity

Let M be a non-deterministic TM and suppose that all branches of M halt on all possible inputs. We define the space complexity f(n) to be the maximum number of work-tape cells that M scans on any branch of its computation, over all inputs of length n.

Non-deterministic space is a useful mathematical construct. It does not correspond to any realistic model of computation.



Space Complexity

Let f : N → N be a function. We define

SPACE(f(n)) = {L : L is accepted by an O(f(n)) space TM}.

Similarly, we define

NSPACE(f(n)) = {L : L is accepted by an O(f(n)) space NTM}.



Space Complexity

Let us consider some facts which make space complexity very different from time complexity.

A sublinear time TM cannot completely read its input. Therefore the languages accepted in sublinear time are not very interesting. This is not the case with sublinear space algorithms. In fact, it is very interesting to study sublinear space algorithms.



Space Complexity

Let us look at an NP-complete problem. We studied CNFSAT; let us see if we can come up with a space-efficient algorithm for this problem. Say we are given a boolean formula φ on n variables in CNF. Let

x1, x2, . . . , xn

be the boolean variables that appear in φ and

C1, . . . , Cm

be the clauses of φ.



Space Complexity

Consider the following algorithm (TM):
1 For each possible assignment of x1, x2, . . . , xn:
2 If every clause Ci, i = 1, . . . , m, is made true by this assignment, accept.
3 Reject.
This idea can be implemented in linear space (a sketch follows).
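A minimal Python sketch of this brute-force idea. The clause encoding (literal +i for xi, -i for its negation) is an assumption chosen purely for illustration; only the current assignment, worth n bits, is ever stored.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Decide CNF satisfiability using O(n) working storage.

    `clauses` is a list of clauses, each a list of non-zero integers:
    +i means x_i and -i means the negation of x_i (assumed encoding).
    """
    for assignment in product([False, True], repeat=num_vars):
        # the formula is satisfied iff every clause has a true literal
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3)
print(brute_force_sat(3, [[1, -2], [2, 3]]))   # True
```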



Space Complexity

1 Each assignment of x1, x2, . . . , xn can be stored in n bits.
2 To check that a clause Ci is made true, we just have to read the clause and verify that at least one of its literals is true under the current assignment.



Space Complexity

This shows:
Theorem
SAT is in SPACE(n).

On the other hand, we know that SAT is NP-complete, and SAT is conjectured not to be in P. This leads to the conjecture that

SPACE(n) ⊈ P.

But this is only a conjecture!



Space Complexity

Let us look at a non-deterministic linear-space machine. Consider the following language (the non-emptiness problem for NFAs):

NE_NFA = {⟨A⟩ : A is an NFA and L(A) ≠ ∅}.

We will devise a non-deterministic linear-space algorithm that decides NE_NFA.



Space Complexity

Let us recall some facts about DFAs and NFAs:
1 If D is a deterministic finite automaton with k states and L(D) ≠ ∅, then D accepts a string of length at most k.
2 If A is an NFA with q states, then there exists a DFA D such that

L(A) = L(D)

and the number of states in D is at most 2^q.



Space Complexity

As a consequence we get:

Theorem
If A is an NFA with q states and L(A) ≠ ∅, then A accepts a string of length at most 2^q. (Combine the two facts above: the equivalent DFA has at most 2^q states.)



Space Complexity

Let us consider the following non-deterministic algorithm (TM) N:
1 On input ⟨A⟩, where A is an NFA:
2 Let q be the number of states of A.
3 Non-deterministically guess a string x of length at most 2^q.
4 Simulate A on x.
5 If A accepts x, then accept. Else reject.



Space Complexity

Now, if L(A) = ∅ then all branches of the above NTM will reject. On the other hand, if L(A) ≠ ∅ then there will be at least one branch that accepts. Thus

L(N) = NE_NFA.



Space Complexity

The above NTM requires 2^q space. Now, we just notice that in order to simulate an NFA we do not need to know all of x but only the current symbol of x. Thus we can reuse space, and we can be more space-efficient by "guessing the symbols" of x as we go along. Let us consider a new machine.



Space Complexity

1 On input ⟨A⟩, where A is an NFA:
2 Let q be the number of states of A.
3 For i = 1, . . . , 2^q:
4 Non-deterministically guess the i-th symbol of x.
5 Simulate A by placing markers on the states A could currently be in (a sketch of this marker update follows the list).
6 If at any point an accepting state of A is marked, accept.
7 Reject.
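The non-deterministic guessing cannot be expressed directly as ordinary code, but the marker bookkeeping of step 5 can. Below is a minimal Python sketch; the dictionary-based NFA encoding and the toy automaton are assumptions made for illustration, not notation from the lecture.

```python
def step_markers(marked, delta, symbol):
    """One marker update: from the set of currently marked states,
    mark exactly the states reachable by reading `symbol`.

    `delta` maps (state, symbol) to a set of successor states.  The
    markers need only one bit per state of A, i.e. q bits in total."""
    return {t for s in marked for t in delta.get((s, symbol), set())}

# Toy NFA over {0,1}: states {0,1,2}, start 0, accepting state 2,
# accepting exactly the strings that contain the substring "01".
delta = {(0, '0'): {0, 1}, (0, '1'): {0}, (1, '1'): {2},
         (2, '0'): {2}, (2, '1'): {2}}

marked = {0}
for symbol in "1001":            # in the NTM these symbols are guessed
    marked = step_markers(marked, delta, symbol)
print(bool(marked & {2}))        # True: "1001" contains "01"
```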



Space Complexity

This NTM requires a counter that can count up to 2^q. Such a counter requires only q bits. Thus the whole simulation can be done in linear space, and we have
Theorem
NE_NFA ∈ NSPACE(n).

Note that non-determinism was used in an essential way here: the symbols of x were guessed one at a time. We will see shortly, via Savitch's theorem, that non-determinism can always be removed at only a quadratic cost in space.



Space Complexity

Let us now look at another very interesting problem, called reachability. Let G = (V, E) be a graph. We say that a vertex t is reachable from s if there is a path from s to t. How can we solve the reachability problem?



Space Complexity

Let us define

R = {⟨G, s, t⟩ : G is a graph and t is reachable from s}.

Thus the computational question is:

Given a graph G and two vertices s and t, decide whether there is a path from s to t.



Space Complexity

You have learned two algorithms to solve this problem: one is called DFS and the other is called BFS. A rough outline of the common idea is the following (a sketch follows the list):
1 Mark s.
2 Repeat n − 1 times:
3 if there is an edge from a marked vertex a to an unmarked vertex b, then mark b.
4 If t is marked, accept. Else reject.
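A minimal Python sketch of this marking outline (not an efficient BFS/DFS implementation). The representation of the graph as a vertex set V and a set of directed edges E is an assumption for the sketch; the dominant space cost is one mark bit per vertex.

```python
def reachable_by_marking(V, E, s, t):
    """Linear-space reachability: repeatedly propagate marks along edges."""
    marked = {s}                      # one bit per vertex, |V| bits in total
    for _ in range(len(V) - 1):       # n - 1 rounds always suffice
        for (a, b) in E:
            if a in marked:
                marked.add(b)
    return t in marked

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3)}
print(reachable_by_marking(V, E, 1, 3))   # True
print(reachable_by_marking(V, E, 1, 4))   # False
```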



Space Complexity

Usually the emphasis is on the time complexity of this algorithm, and DFS and BFS are elegant and efficient implementations of this idea (with very nice properties). However, now we want to view these algorithms from the point of view of space complexity.



Space Complexity

The point to notice is that both of these algorithms require us to mark the vertices (which are initially all unmarked). Therefore, we need one bit for each vertex, which we set when we mark that particular vertex. Thus the whole algorithm takes linear space.



Space Complexity

Question
Can reachability be solved in sublinear space?

In fact, as we will see a little later, reachability is one of the most important problems in space complexity. For now, let us start with an ingenious space-efficient algorithm for reachability.



Space Complexity

Let G = (V, E) be a graph on n vertices and let a and b be two vertices. We say that b is reachable from a via a path of length k if there exist a = v0, . . . , vk = b such that

(vi, vi+1) ∈ E

for all i = 0, . . . , k − 1.



Space Complexity

Let us begin with the following facts:

Fact
If b is reachable from a via a path of length at most k, where k ≥ 2, then there is a vertex c such that:
1 c is reachable from a via a path of length at most ⌈k/2⌉, and
2 b is reachable from c via a path of length at most ⌈k/2⌉.



Space Complexity

We will also use the following fact:

Fact
If b is reachable from a at all, then it is reachable via a path of length at most n, where n is the number of vertices in the graph.



Space Complexity

Two other small facts are:


Fact
b is reachable from a via a path of length 0 if and only if b = a.

Fact
b is reachable from a via a path of length 1 if and only if (a, b) ∈ E.



Space Complexity

Now, let us write a function reach(a,b,k). This function will return true if
b is reachable from a via a path of length at most k .



Space Complexity

reach(a, b, k)
1 If k = 0 then
2 if a = b return true. Else return false.
3 If k = 1 then
4 if a = b or (a, b) ∈ E return true. Else return false.
5 For each vertex c ∈ V
6 if reach(a, c, ⌈k/2⌉) and reach(c, b, ⌈k/2⌉)
7 return true;
8 return false;
How much space does this program take?
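Before answering that, here is a direct Python transcription of reach, assuming the graph is given as a vertex set V and an edge set E. The analysis on the next slides refers to the pseudocode above, but it applies equally to this sketch.

```python
import math

def reach(V, E, a, b, k):
    """True iff b is reachable from a via a path of length at most k.

    Each stack frame holds only a, b, k and the loop variable c, i.e.
    O(log n) bits, and the recursion depth is O(log k)."""
    if k == 0:
        return a == b
    if k == 1:
        return a == b or (a, b) in E
    half = math.ceil(k / 2)
    return any(reach(V, E, a, c, half) and reach(V, E, c, b, half)
               for c in V)

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3)}
print(reach(V, E, 1, 3, len(V)))   # True
print(reach(V, E, 1, 4, len(V)))   # False
```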



Space Complexity

We can solve the reachability problem by simply calling

reach(s, t, n).

But how much space will that take? This is a recursive program, so we need to analyze its space requirements. There are three parameters that are passed and one local variable (c), and these have to be put on the stack. How much space do these variables take?



Space Complexity

If the vertex set of the graph is V = {1, . . . , n} then each vertex can be stored in log₂ n bits. Similarly, the parameter k is a number between 1 and n and requires log₂ n bits to store. Thus each call will require us to store O(log n) space on the stack. So, the space requirements are

O(log n) × depth of the recursion.



Space Complexity

The depth of the recursion is only ⌈log₂ n⌉, since we start with k = n and each recursive call halves it. Hence the total space requirement is

O(log² n).



Space Complexity

Theorem
Reachability is in SPACE(log² n).



Space Complexity Classes

Let us prove a very simple relationship between time and space complexity. Let us say that M is a deterministic TM that uses space s(n). Can we give an upper bound on the time taken by the TM M?



Space Complexity Classes

Let us count how many configurations of M are possible on an input of length n. Note that we can specify a configuration by writing down the contents of the work tape (there are at most s(n) cells used there), the positions of the two heads, and the current state (a constant factor). Thus there are, up to a constant factor, at most

n × s(n) × b^{s(n)}

configurations. Here b is the number of symbols in the work alphabet of the machine.



Space Complexity Classes

Theorem
If M runs in space s(n) ≥ log n, then there are at most

2^{O(s(n))}

distinct configurations of M on any input x of length n.

We only have to note that

n × s(n) × b^{s(n)} = 2^{O(s(n))} whenever s(n) ≥ log n.
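In slightly more detail, writing |Q| for the number of states of M (a factor absorbed into the constant above), the count can be bounded as follows; the assumption s(n) ≥ log₂ n is exactly what lets the factor n be absorbed into the exponent.

```latex
\underbrace{|Q|}_{\text{state}}
\cdot \underbrace{n}_{\text{input head}}
\cdot \underbrace{s(n)}_{\text{work head}}
\cdot \underbrace{b^{\,s(n)}}_{\text{work-tape contents}}
\;\le\; 2^{c\,s(n)}
\quad \text{for a suitable constant } c,\ \text{provided } s(n) \ge \log_2 n.
```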



Space Complexity Classes

Theorem
If M accepts an input in space s(n), then it accepts it in time

2^{O(s(n))}.

This theorem is true for both deterministic and non-deterministic TMs.



Space Complexity Classes

Let us define the analogues of P and NP from the space complexity point of view:

PSPACE = ⋃_k SPACE(n^k)

and

NPSPACE = ⋃_k NSPACE(n^k).



Space Complexity Classes

The analogue of the P vs. NP question for space computation is the following:

Question
Is
PSPACE = NPSPACE?



Space Complexity Classes

Unfortunately, there is no prize for this problem: it was solved by Savitch, who proved

PSPACE = NPSPACE.

We will prove this theorem. It is a consequence of the following theorem.



Space Complexity Classes

Theorem
Let f(n) ≥ n. Then

NSPACE(f(n)) ⊆ DSPACE(f(n)²).



Space Complexity Classes

We want to show that if L is accepted by a non-deterministic TM in space O(f(n)), then it can be accepted by a deterministic TM that uses space O(f(n)²). The main idea is similar to the one that we used for reachability.



Space Complexity Classes

Let N be a non-deterministic TM that accepts a language L in space f(n). We will build a deterministic TM M that accepts L in space O(f(n)²). M will simply check whether an accepting configuration of N is reachable from the starting configuration.



Space Complexity Classes

Let us for a moment make the assumption that we can compute f(n) easily (we will come back to this point). Let us now define a problem called yieldability. Suppose we are given two configurations c1 and c2 of the NTM N and a number t. The question is:

Can c1 yield c2 in at most t steps?

We further assume that N uses space f(n). Consider the following TM (algorithm).



Space Complexity

CANYIELD(c1, c2, t)
1 If t = 1 then
2 if c1 = c2 or c2 follows from c1 via the rules of N, return true. Else return false.
3 For each configuration c′ of length at most f(n)
4 if CANYIELD(c1, c′, ⌈t/2⌉) and CANYIELD(c′, c2, ⌈t/2⌉)
5 return true;
6 return false;
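This is the same divide-and-conquer idea as reach, applied to configurations instead of vertices. Below is a minimal Python sketch; the callbacks step (the one-move relation of N) and all_configs (an enumeration of the configurations of length at most f(n)) are assumptions of the sketch, since they depend on the particular NTM N.

```python
import math

def can_yield(c1, c2, t, step, all_configs):
    """True iff configuration c2 is reachable from c1 in at most t steps.

    Each recursion level stores only a constant number of
    configurations, so the stack uses O(f(n) * log t) space."""
    if t <= 1:
        return c1 == c2 or step(c1, c2)
    half = math.ceil(t / 2)
    return any(can_yield(c1, m, half, step, all_configs) and
               can_yield(m, c2, half, step, all_configs)
               for m in all_configs())

# Toy example: "configurations" are 0..7 and one step goes from c to c + 1.
print(can_yield(0, 5, 8, lambda c, d: d == c + 1, lambda: range(8)))  # True
```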



Space Complexity

Once again this is a recursive program, so we have to see how much space it requires. It stores configurations on the stack, and each configuration takes space

O(f(n)).



Space Complexity

Furthermore, if we call CANYIELD(c1, c2, t), then the depth of the recursion will be log₂ t. Now, given the NTM N and an input x, let us consider the following deterministic TM.



Space Complexity

1 Input x.
2 Compute f(n).
3 Let c0 be the initial configuration of N on x.
4 For all accepting configurations cf of N of length at most f(n):
5 if CANYIELD(c0, cf, 2^{O(f(n))}), accept.
6 Reject.



Space Complexity

It is clear that if N accepts x in space f(n), then the above algorithm will also accept x. On the other hand, if N does not accept x in space f(n), then the above algorithm will reject. The space required by the above algorithm is

f(n) × log(2^{O(f(n))}) = O(f(n) × f(n)) = O(f(n)²).



Space Complexity

This completes the proof of our theorem. There is one technicality left: in the algorithm we have said that we will compute f(n), but we may not know how to compute f(n). In that case, what can we do? This is just a technicality, and we can add the assumption that f(n) is easily computable to our theorem. Another way to get around it is as follows:



Space Complexity

Note that we can use the above function, CANYIELD, to test whether the initial configuration yields some configuration of length f or not. We can start with f = n and keep incrementing its value until we find the length of the largest configuration reachable from the initial configuration. We can then use that value of f to see whether N reaches an accepting configuration of length at most f.





Space Complexity

The above theorem shows that

NSPACE(n^k) ⊆ DSPACE(n^{2k})

and therefore,

PSPACE = NPSPACE.

