
Space Complexity

1 Introduction
So far, we have used time as a resource that has to be optimized during computation. This week, we shall
study space as a resource and the implications of the same. In our exploration, we still use the Turing
machine as our computational model. Our interest is to be able to define appropriate complexity classes
based on the space used. We shall also study how these complexity classes relate to complexity classes
defined with respect to time. We will study how the deterministic and the nondeterministic machines may
differ in their space complexity. Further, we can also see if there is a hierarchy of complexity classes with
respect to space.

1.1 Space
We start with the following definition. We consider TMs with three tapes: a read-only input tape, a
read-write work tape, and a write-only output tape. The TM cannot write on the input tape
but can read any cell of this tape. The number of cells used by the TM is the number of cells written on the
work tape.

Definition 1.1 (DSpace) Let s : IN → IN and let L ⊆ {0, 1}∗ . Let M be a deterministic Turing machine
that decides L with the property that for any w ∈ {0, 1}∗ , M executing on input w uses at most c · s(|w|)
tape cells, for a constant c, before deciding whether w ∈ L. Then, we say that L ∈ DSpace(s(n)).

Notice the similarity to the definition of deterministic time complexity. However, there is a
fundamental difference between the resources space and time. Unlike time, space can be reused, and this
property has far-reaching implications. Consider for example a TM that implements a counter for bit strings
of length n. While it runs for 2^n time steps, it uses only n cells of space. We first show the following
example and then generalize the same.
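As a concrete illustration of space reuse, here is a minimal Python sketch of such a counter; the function name `count_all` is our own illustrative choice, not from the text.

```python
def count_all(n):
    """Enumerate all 2^n bit strings of length n in place.

    The counter performs 2^n increment steps but only ever stores
    n bits (the list `bits`), illustrating how space is reused.
    """
    bits = [0] * n          # the entire work tape: n cells
    steps = 0
    while True:
        steps += 1
        # increment the binary counter in place, rightmost bit first
        i = n - 1
        while i >= 0 and bits[i] == 1:
            bits[i] = 0
            i -= 1
        if i < 0:           # overflow: all 2^n strings have been visited
            return steps
        bits[i] = 1

# count_all(4) performs 16 increments while storing only 4 cells
```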

Example. 1.2 Recall the problem CLIQUE defined as:

CLIQUE = {hG, ki | G has a clique of size k}.

It is known that CLIQUE ∈ NP and, in fact, that CLIQUE is NP-complete. Therefore, it is unlikely that
there will be a polynomial time algorithm for CLIQUE. However, a deterministic TM can be designed to
solve the CLIQUE problem using space in O(n) as follows.
Recall that each candidate solution to the CLIQUE problem can be represented as a subset of k vertices
out of the n vertices. Such a subset can in turn be represented as a bit string of length n. The TM M
enumerates the bit strings corresponding to the k-subsets of the n vertices in lexicographic order, one at a
time. To verify if a particular subset is a solution, the TM can check in polynomial time if the vertices
belonging to the subset are mutual neighbors. In this case, the machine accepts the input. If all subsets fail
to be valid solutions, the machine rejects the input. The space used by M on its work tape is in O(n).

Notice also how separating the input tape and the work tape is convenient. In the above example, it is
still true that the input can also be represented in polynomial space. As we define complexity classes with
respect to space that are sub-linear, this difference becomes more important.
The above example is not an exception among problems in NP. As the following theorem shows,
it appears that space is more powerful than time. Let PSPACE = ∪k∈IN DSpace(n^k ).

Theorem 1.3 P ⊆ NP ⊆ PSPACE.

The proof of the above theorem is left as an assignment. It can be shown by using the
following claim on the notion of configuration graphs of TMs.
Let M be a TM that is either deterministic or nondeterministic. A configuration of M is a description of
the contents of all the non-blank cells of the work tape of M , along with the state of M and the position of its
head. Given a TM M and an input w to M , the configuration graph GM,w , a directed graph, can be defined
as follows. The nodes in GM,w are the configurations that M can reach from the starting configuration,
with w on the input tape, the state being the start state, and the head at the first symbol of w. An edge
between configurations C1 and C2 exists in GM,w iff from C1 the machine can reach C2 in one step. If M is
deterministic, then the outdegree of GM,w is 1 and if M is nondeterministic, then without loss of generality
we can take that GM,w has outdegree 2. Further, we can assume that M has only one accepting state, and
also one accepting configuration, Caccept . So, M accepts w iff there is a directed path from Cstart to Caccept
in GM,w . The following claim is easy to show.

Claim 1.4 Let M be a machine that decides some language L ⊆ {0, 1}∗ using space s(n). Let GM,w be the
configuration graph of M on input w. Then,

1. GM,w has 2^{O(s(n))} nodes.

2. There is a CNF Boolean formula of size O(s(n)) that, given the encodings of two configurations C1
and C2 , evaluates to true if and only if C1 and C2 are neighbors in GM,w .

Proof. A basic counting argument establishes item (1) as follows. A configuration is completely determined
by the symbols on the tape, the position of the head, and the state of the machine. If the TM uses at most
s(n) tape cells, then the number of different ways the symbols can be arranged on the tape is 2^{s(n)} , assuming
that the alphabet is {0, 1}. The number of states |Q| can be taken to be a constant, and the position of the
head has at most s(n) choices. Put together, the total number of nodes in the graph is 2^{O(s(n))} .
For item (2), this observation is the essence of the NP-completeness of SAT as shown by Cook in his
famous theorem. Essentially, we can use a conjunction of constant-size Boolean checks, one per tape cell, to
verify that two configurations are neighbors in the above graph; each such check has a size that depends only
on the number of states and the size of the alphabet. □

To continue further, let us see how DTMs and NTMs can differ in their space usage. In this direction,
we first extend Definition 1.1 as follows.

Definition 1.5 (NSpace) Let s : IN → IN and let L ⊆ {0, 1}∗ . Let M be a nondeterministic Turing machine
that decides L with the property that for any w ∈ {0, 1}∗ , M executing on input w uses at most c · s(|w|)
tape cells, for a constant c before deciding whether w ∈ L. Then, we say that L ∈ NSpace(s(n)).

Recall that simulating a computation of an NTM on a DTM seems to require an exponential increase
in time. For instance, an NTM running in t(n) time can be simulated by a DTM in time O(2^{t(n)} ). In a
remarkable result, Savitch showed that, in contrast, deterministic TMs can simulate nondeterministic TMs with
a very small space overhead.

Theorem 1.6 (Savitch) Let s : IN → IN be such that s(n) ≥ n. Then,

NSpace(s(n)) ⊆ DSpace(s(n)^2 )

Proof. To think about the proof, here is an idea. We need to show that a deterministic TM can simulate the
actions of a nondeterministic TM that uses a space of s(n). Using Claim 1.4 is a possibility. We need to
check in space s(n)^2 if the starting configuration leads to the accepting configuration. The number of nodes
in the graph is indeed 2^{O(s(n))} . But given that the outdegree is 2, the number of paths in the graph can be
as large as 2^{2^{O(s(n))}} . If we have to systematically explore all these paths and represent a path by its
number, we need a space of 2^{O(s(n))} , which is much more than the O(s(n)^2 ) promised by the theorem.
So we need a new trick here. Indeed, counting all possible paths is rather naive. There are several
simple graphs with a polynomial number of nodes but an exponential number of paths. [[Create one such graph
to convince yourself.]] The trick is to think of the graph reachability problem, which need not be solved in
such a brute-force manner.
The trick is therefore to mimic the graph reachability solution using as little space as possible. The
details follow. Let C1 and C2 be two configurations and let the predicate Reach(C1 , C2 , t) return Yes or
No depending on whether from configuration C1 there is a path to configuration C2 in at most t steps. The
predicate Reach can be used recursively starting from Reach(Cstart , Caccept , t), where t = 2^{c·s(n)} for a
suitable constant c. [[Why should t be so high?]] The predicate can be answered recursively as follows.
To decide Reach(C1 , C2 , t), one can recursively check if there exists a configuration Cmid such that
Reach(C1 , Cmid , t/2) and Reach(Cmid , C2 , t/2) are both true. (In the above, we can take that t is an exact
power of 2, or consider ceil(t/2).) The recursive program can be written as shown below:

Algorithm Reach(C1 , C2 , t)
begin
If t = 1 then
Check if either C1 = C2 or C1 and C2 are neighboring configurations.
If so, return True, else return False;
end-if
else
For each configuration Cmid do
result1 = Reach(C1 , Cmid , t/2);
result2 = Reach(Cmid , C2 , t/2);
If result1 and result2 then return True;
end-for
return False;
End-if
End-Algorithm

The deterministic machine M that simulates the s(n)-space nondeterministic machine N simply implements
the above program. Let us now see how much space M needs for this simulation. For each recursive
call, M needs to store three variables, namely C1 , C2 , and t. Since each is at most 2^{O(s(n))} , each of them
can be stored using O(s(n)) bits. There are also log 2^{O(s(n))} = O(s(n)) levels in the recursion. So, the
overall space used by M is in O(s(n)^2 ). □
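The recursion can be tried out on an explicit graph. The following Python sketch (our own `reach` helper; a real simulation would enumerate configurations on the fly rather than store an adjacency structure) mirrors the algorithm, including the ceil(t/2) convention.

```python
def reach(adj, c1, c2, t):
    """Savitch-style reachability: is there a path c1 -> c2 of length <= t?

    Each call stores only (c1, c2, t); the recursion depth is log2(t),
    mirroring the O(s(n)) bits per frame times O(s(n)) frames analysis.
    """
    if t <= 1:
        return c1 == c2 or c2 in adj[c1]
    half = (t + 1) // 2                      # ceil(t/2), as in the text
    for mid in adj:                          # try every possible midpoint
        if reach(adj, c1, mid, half) and reach(adj, mid, c2, half):
            return True
    return False                             # no midpoint works

# Directed path 0 -> 1 -> 2 -> 3:
g = {0: {1}, 1: {2}, 2: {3}, 3: set()}
# reach(g, 0, 3, 4) is True, while reach(g, 3, 0, 4) is False
```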

An immediate implication of Savitch's theorem is the following corollary.

Corollary 1.7
P ⊆ NP ⊆ PSPACE = NPSPACE ⊆ EXPTIME, where NPSPACE = ∪k∈IN NSpace(n^k ).

Figure 1: Containments among complexity classes, as is widely believed.

In the above list of containments, we do not know if any individual containment is proper. It is possible
to show that P ≠ EXPTIME. Therefore, in the list of containments, at least one of them has to be proper.
A picture of the above containments, assuming that all are proper except the one indicated to be equal, is
shown in Figure 1. It is widely believed that the situation is as shown in Figure 1.

2 PSPACE and PSPACE-Completeness
A natural question to ask when we define a new complexity class is to seek problems that are complete
with respect to that class. Recall the class PSPACE of problems. We now seek the notion of PSPACE-
completeness for the class PSPACE.

Definition 2.1 A language L is said to be PSPACE-complete if:

• L is in PSPACE, and

• Every language L′ in PSPACE is polynomial time reducible to L.

Notice the limitation on the reduction in the above definition. We are following the general rule that
the computational class the reduction function belongs to must be weaker than the class for which we want
to establish completeness. Analogous to the satisfiability problem for NP-completeness, we have a quantified
formula satisfiability problem that can be shown to be PSPACE-complete.
Quantified Boolean formulae can be defined as follows. Recall the universal quantifier (∀) and the
existential quantifier (∃). Certain mathematical statements require the use of quantifiers to indicate the
scope of applicability of the statement. For instance, when one considers natural numbers, the statement
x^2 + y^2 = z^2 is true only for some triples x, y, and z. So, one can write the above as ∃x∃y∃z x^2 + y^2 = z^2 .
Such a statement is called a quantified formula. Another example is the following: ∀x ⟨∃y∃d x = y · d⟩ ∧
⟨∃z∃e x = z · e⟩. What does the above statement capture? Is it true? Assume that x, y, z, d, and e are
positive integers.
The difference between the above two examples is that in the former all quantifiers appear at the
beginning of the statement. Such a statement is said to be in prenex normal form. When we consider that the
variables are from {0, 1}, then quantified statements in prenex normal form are called quantified Boolean
statements. Further, when every variable has some quantifier associated with it, the statement
is called fully quantified. We define the language T QBF as:

T QBF = {Φ | Φ is a true fully quantified Boolean formula}.

We show now that T QBF is PSPACE-complete.

Theorem 2.2 TQBF is PSPACE-complete.

Proof. As usual there are two items to show. One is to show that TQBF is in PSPACE and the second is to
show that every other language in PSPACE is reducible to TQBF.
For the first item, consider a fully quantified Boolean formula Φ. A deterministic TM for checking whether
Φ ∈ T QBF can be designed as follows. There are at most a polynomial number of variables in Φ. The TM
evaluates Φ recursively over the quantifiers. If Φ has no quantifiers, it consists only of constants and can be
evaluated directly. If Φ = ∃x Ψ, the TM recursively evaluates Ψ with x = 0 and with x = 1, and accepts iff
at least one of the two evaluations is true. If Φ = ∀x Ψ, the TM accepts iff both evaluations are true. Since
the space used for one recursive evaluation can be reused for the next, and the depth of the recursion is at
most the number of variables, the total space used by the TM is polynomial in the length of the input.
Hence, T QBF ∈ PSPACE.
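A recursive evaluator for fully quantified Boolean formulas in prenex form can be sketched in Python. Each quantifier is handled by trying both values of its variable and reusing the space; the encoding via a `quantifiers` list and a `formula` callback is our own illustrative choice.

```python
def eval_qbf(quantifiers, formula, assignment=()):
    """Evaluate a fully quantified Boolean formula in prenex form.

    quantifiers is a list like [('A', 'x'), ('E', 'y')] ('A' for
    forall, 'E' for exists); formula maps a dict of variable values
    to True/False.  Only one partial assignment is live at any time,
    so the evaluator uses space polynomial in the number of variables.
    """
    if not quantifiers:
        return formula(dict(assignment))
    (q, var), rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, formula, assignment + ((var, b),))
                for b in (False, True))
    return any(branches) if q == 'E' else all(branches)

# ForAll x Exists y : (x XOR y)  -- true, since y can be chosen as not-x
phi = [('A', 'x'), ('E', 'y')]
# eval_qbf(phi, lambda v: v['x'] != v['y']) evaluates to True
```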
The second item is more involved. Let L be a PSPACE language and let M be a TM that recognizes L.
We wish to show that the action of M on an input w can be coded into a fully quantified Boolean formula Φ
so that Φ is true iff w ∈ L.
Notice the similarity between the present problem and that of reducing any language L in NP
to SAT. Briefly, the proof of the latter constructs a Boolean formula that is satisfiable iff the machine for L
accepts an input w.

One can check that the above approach fails for the PSPACE machine M for the following reason.
M , being bounded by polynomial space, can run for an exponential number of steps. Therefore, the Boolean
formula that represents the actions of M on input w can be exponentially long. The reduction, however, is
limited to run in polynomial time. So, such a reduction would not work.
Let us see if the configuration graph we defined for M on input w, GM,w helps. One can use the proof
technique of Savitch’s theorem to say that M accepts w iff there exists a configuration Cmid so that the
formula corresponding to [Cstart can reach Cmid in t/2 steps] and the formula corresponding to [Cmid can
reach Caccept in t/2 steps] are both true. Let us represent the above by saying that Φc1 ,c2,t corresponds to the
Boolean formula with c1 as the starting configuration and c2 as the ending configuration and t is the time
allowed for reaching from c1 to c2 . The formula Φc1 ,c2 ,t can be constructed recursively as:

Φc1 ,c2 ,t = ∃c′ (Φc1 ,c′ ,t/2 ∧ Φc′ ,c2 ,t/2 )

For t = 1, we just have to check if the configurations in Φ are neighboring configurations in GM,w .
While the above is technically correct, there is one problem.
The length of the expression ΦCstart ,Caccept ,t for t = 2^{O(n^k )} can be super-polynomial. The
recursion is controlled by the parameter t and every recursive call reduces t to t/2, but also doubles the size
of the formula. So, the formula ends up having an exponential length as t can be exponential. The reduction
cannot write such a long formula.
We therefore need another method to encode the actions of M into a polynomial sized fully quantified
Boolean formula. This can be done as follows. The trick is to not double the length of the formula every
recursive step. This is done by redefining Φc1 ,c2,t as:

Φc1 ,c2 ,t := ∃Cmid ∀(c, d) ∈ {(c1 , Cmid ), (Cmid , c2 )} ⟨Φc,d,t/2 ⟩.

While this does not double the size of the Boolean formula at every recursive step, the scope of the
variables c and d is no longer Boolean. However, that can be easily fixed by writing the subexpression
∀(c, d) ∈ {(c1 , Cmid ), (Cmid , c2 )} ⟨Φc,d,t/2 ⟩ as ∀c∀d [((c, d) = (c1 , Cmid ) ∨ (c, d) = (Cmid , c2 )) → Φc,d,t/2 ].
This way, the length of the formula ΦCstart ,Caccept ,t is polynomial in |w|. □

An important observation regarding languages in PSPACE is that winning strategies for most games can
be captured using quantified Boolean formulae. Most board games can thus be shown to be PSPACE-hard.

3 Sub-linear Space
In this section, we explore problems that can be solved using space that is less than the space consumed
by the input itself. Hence, if n denotes the size of the input, we are interested in problems that can be solved
in sublinear space. Notice that the machine does have enough time to read the entire input while still
working in sublinear space, so this notion is not ill-defined. Interestingly, there is an analogous notion with
respect to time, called sublinear time algorithms, studied recently in the works of Indyk and others.
While we may not have time to study the sublinear time works, please see [] for more details.
Given that we are considering sublinear space, what are the right sublinear space functions that are
interesting? Should we seek problems that can be solved in O(√n)-space, O(n^ε )-space for some constant
ε < 1, or should we investigate O(log n)-space, or even smaller space such as O(log log n)-space? It turns
out that logarithmic space is a good candidate to study as it provides some natural intuitions to think about. In
logarithmic space, one essentially has just enough space to store a constant number of pointers into the
input! Still, it turns out that there is a class of interesting problems that can be solved in logarithmic space.
Further, this class is invariant to simple changes to the Turing machine model such as adding
more tapes, or increasing the size of the alphabet, etc. Therefore, studying logarithmic space is interesting.

Definition 3.1 We define LogSpace to be the class of languages that can be decided by a deterministic
Turing machine M using at most O(log n) cells of the tape where n is the size of the input. In other words,

LogSpace = DSpace(log n)

The following example shows a simple language that can be decided in logarithmic space.

Example. 3.2 Let L be the language {⟨G⟩ | G has no triangle}. Then L can be recognized in LogSpace as
follows. On the work tape, the machine M writes the 3-tuples of the vertices of G, one at a time. For each
tuple written, the machine checks if there is a triangle passing through those vertices. If all tests fail, then M
accepts, otherwise M rejects. The space used by M is just enough to store three vertex identifiers, therefore
L is in LogSpace.
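A Python sketch of this machine, assuming an adjacency-dictionary input: the three loop variables play the role of the three vertex identifiers on the work tape, O(log n) bits each, reused across iterations. The helper name `triangle_free` is our own.

```python
def triangle_free(adj):
    """Check triangle-freeness using only three 'pointers' into the input.

    u, v, w together hold O(log n) bits, like the work tape of the
    LogSpace machine in the example; no other state is kept.
    """
    vertices = list(adj)
    for u in vertices:
        for v in vertices:
            for w in vertices:
                if v in adj[u] and w in adj[v] and u in adj[w]:
                    return False    # found a triangle: reject
    return True                     # all 3-tuples checked: accept

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
square   = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
# triangle contains a triangle; the 4-cycle square does not
```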

Analogous to the definition of NSpace, LogSpace also has a nondeterministic counterpart.

Definition 3.3 We define NLogSpace to be the class of languages that can be decided by a nondeterministic
Turing machine M using at most O(log n) cells of the tape where n is the size of the input. In other words,

NLogSpace = NSpace(log n)

The following example illustrates a problem that can be solved nondeterministically using logarithmic
space.

Example. 3.4 Extend the above example to define the language L = {⟨G⟩ | G is not bipartite}. Recall that
a graph is not bipartite iff it contains an odd cycle. It can be shown that L is in NLogSpace as follows. The
nondeterministic machine guesses an odd integer k such that 1 ≤ k ≤ n where G has n vertices. For this value
of k, it then guesses, one by one, k vertices that form a cycle in G. Notice that the k vertices need not be
stored all at the same time. Only neighbouring vertices on the cycle need be stored on the tape. Therefore
L ∈ NLogSpace.

Example. 3.5 Let PATH be defined as the language {⟨G, u, v⟩ | G is a directed graph with a path from u to v}.
PATH essentially asks whether the given directed graph G has a path from vertex u to vertex v.
Recall that there are standard algorithms that run in polynomial time and space that can decide the PATH
problem. The challenge, however, is to show that a small space indeed suffices.

A nondeterministic machine M that decides PATH can be designed as follows. On its work tape, M
guesses the next node on a path from u to v. M starts by guessing one of the out-neighbors of u, say u1 . M
writes u1 on its work tape. It then guesses an out-neighbor of u1 and writes the id of this node on the work
tape in the same space where u1 is written. If M is able to reach v via these guesses, then M accepts. If M
makes more than n = |V (G)| guesses, then M rejects the input.
Notice that M does not keep the history of its guesses; M just has to decide to accept or reject and need
not show a path in case of acceptance.
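For contrast, the standard deterministic baseline mentioned earlier can be sketched as a BFS. It runs in polynomial time but stores a `seen` set of up to n vertex ids, exactly the space the logspace machine cannot afford; the nondeterministic machine above replaces `seen` with a single guessed vertex and a step counter.

```python
from collections import deque

def path(adj, u, v):
    """Decide PATH by BFS: polynomial time, but linear space.

    The seen set is the space bottleneck this section is trying
    to avoid.
    """
    seen, frontier = {u}, deque([u])
    while frontier:
        x = frontier.popleft()
        if x == v:
            return True              # reached v: accept
        for y in adj[x]:
            if y not in seen:
                seen.add(y)          # one bit per vertex: n bits total
                frontier.append(y)
    return False                     # frontier exhausted: reject

g = {0: {1}, 1: {2}, 2: set(), 3: {0}}
# path(g, 0, 2) is True; path(g, 2, 0) is False
```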

The above example is rather remarkable. It turns out that if G is an undirected graph, then nondeterminism
is not required: by a celebrated result of Reingold, the undirected version of the above problem can be solved
in logarithmic space by a deterministic machine.

3.1 Savitch’s Theorem for LogSpace and NLogSpace


Recall from Savitch's theorem that for space bounds that are at least linear, it holds that NSpace(s(n)) ⊆
DSpace(s(n)^2 ). In this section, we argue that a similar result holds whenever s(n) ≥ log n. One of the
important pieces in the proof of Savitch's theorem is the relation between the space used by a TM and the
time the TM takes to decide on any input. Specifically, we showed that an s(n)-space bounded TM runs in
time 2^{O(s(n))} . Is this relation still true for logarithmic space TMs? Not so at first glance: a TM running
in O(1) space can read the entire input and do nothing with it, thereby consuming O(n) time, which already
exceeds 2^{O(1)} . We therefore introduce the following definition that relates the space and time for every
function s(n).

Definition 3.6 Consider a multitape TM M with a separate read-only input tape. Let w be an input to M .
A configuration of M on w consists of the state of M , the contents on its work tape, and the positions of its
heads, including the head of the read-only tape.

The essence of this definition is to separate the input from the configuration of a TM. This causes no
problem as the input is read-only, and the position of the input head is included in the configuration.
Using the above definition, it now holds that if a TM runs in space s(n), then on an input w of length
n, the number of configurations is in n · 2^{O(s(n))} . How?
Finally, Savitch's theorem can be proved using the above number of configurations. The configuration
graph again has n · 2^{O(s(n))} nodes and we have to solve a reachability problem. Storing the parameters
for every recursive call now requires a space of log(n · 2^{O(s(n))} ) = log n + O(s(n)). Therefore, so long as
s(n) ≥ log n, Savitch's theorem holds.

3.2 NLogSpace-Completeness
Analogous to the definition of time based complexity classes P and NP, we have now defined logarithmic
space based complexity classes LogSpace and NLogSpace. So, it is natural to ask about the relation between
the classes LogSpace and NLogSpace. Clearly, LogSpace ⊆ NLogSpace, and the question therefore is whether
LogSpace = NLogSpace or not. This sounds very similar to the P versus NP question. To create evidence
that P is not equal to NP, there is the notion of NP-completeness, which shows that certain problems in NP
are not likely to be in P. In the same flavour, one can build evidence that LogSpace ≠ NLogSpace by
searching for complete problems with respect to NLogSpace.
As with the general definition of a problem complete for a class, an NLogSpace-complete problem
represents a problem that is most difficult in the class NLogSpace. Therefore, a problem A is NLogSpace-
complete if it is in NLogSpace and every other problem in NLogSpace reduces to A. What has to be specified
in the above notion is the power allowed to the reduction function. If we use polynomial time reducibility,
then there is a small inconsistency. All problems in NLogSpace are solvable in polynomial time. So, every
pair of problems in NLogSpace, except the pairs involving ∅ and Σ∗ , are reducible to each other. This
suggests that polynomial time reducibility is too powerful a notion in the case of NLogSpace-completeness.
(In general, when defining complete problems with respect to a class, the power allowed to the reduction
must be smaller than the resources sufficient to decide the class itself.) We therefore introduce the following
notion of log space reducibility.
Definition 3.7 A language A is log space reducible to a language B, written A ≤L B, if instances of A
can be converted to instances of B using a function f : Σ∗ → Σ∗ such that w ∈ A iff f (w) ∈ B, where f is a
log space computable function. A log space computable function is a function for which there is a Turing
machine M with a read-only input tape, an O(log n) long work tape, and a write-only output tape, which on
input w ∈ Σ∗ on the input tape computes f (w) and writes f (w) on the output tape.
Using the above definition, a problem L1 is NLogSpace-complete if L1 ∈ NLogSpace and every problem
in NLogSpace is log space reducible to L1 . Standard results that follow from the above definition are given
below.
Theorem 3.8 The following are both true.
• If A ≤L B and B ∈ LogSpace then also A ∈ LogSpace.
• If any NLogSpace-complete language is in LogSpace then LogSpace = NLogSpace.
To end this section, we finally show that the PATH problem defined earlier is NLogSpace-complete.
Theorem 3.9 PATH is NLogSpace-complete.
Proof. There are two items to show. Firstly, we need to show that PATH ∈ NLogSpace. But that is
already done in Example 3.5. Secondly, we need to show that every other problem in NLogSpace is log
space reducible to PATH. For this, we need to exhibit a log space reduction that converts instances
of any problem in NLogSpace to instances of PATH. The reduction works as follows.
Imagine for a moment that there is no restriction on the space used for the reduction. Let us understand
how to reduce any problem in NLogSpace to the PATH problem. Let B ∈ NLogSpace and let M be a
nondeterministic TM that decides B. One can think of constructing a graph GM,w for input w such that
nodes in GM,w correspond to the configurations of the machine M on w. Now, w is accepted by M iff there
is a path in GM,w from the start configuration to the accepting configuration, which is an instance of
PATH.
Let us now see how this graph can be constructed using only logarithmic space. The nodes of GM,w
are the configurations of M on input w. There are n · 2^{O(log n)} = n^{O(1)} such configurations, so the
graph has a polynomial number of nodes. An edge exists in GM,w from configuration C1 to C2 iff either
C1 = C2 or C2 is reached in one step of the machine from C1 . This can be checked given the definition of M .
To compute the graph in logarithmic space, one can proceed as follows. Notice that only the work tape
is bounded by logarithmic space, but not the output tape. To describe a graph, one needs to simply describe
its nodes and edges. The nodes can be listed in lexicographic order as follows. Each node in GM,w can be
represented in O(log n) bits. So, start by listing all bit strings of length O(log n) on the work tape. For each
bit string b, check if b is a valid configuration of M on input w. If so, write b on the output tape. Write the
string next to b in the lexicographic order on the work tape, and repeat until all bit strings of length O(log n)
are checked. To list the edges, a similar technique can be used. Now, we list pairs of bit strings of length
O(log n) in lexicographic order on the work tape. We check if the current pair is a valid edge, and if so,
write the edge on to the output tape. This check can be done by using the transition function of M . □

An immediate corollary of the above theorem is the following.


Corollary 3.10 NLogSpace ⊆ P.

3.3 The Class coNLogSpace
We now turn our attention to another space based complexity class, along with a surprising result. The class
coNLogSpace is the set of languages whose complement can be decided by a nondeterministic Turing
machine using logarithmic space. The class coNLogSpace can be seen as analogous to the class coNP. It is
generally believed that NP ≠ coNP. However, as the following theorem shows, NLogSpace = coNLogSpace.

Theorem 3.11 (Immerman–Szelepcsényi) NLogSpace = coNLogSpace.

Proof. We have to show that every problem in coNLogSpace is also in NLogSpace. Here is where complete
problems come to our rescue. Recall that PATH is NLogSpace-complete. So, if the complement of PATH,
denoted co-PATH, were shown to be in NLogSpace, then every problem in coNLogSpace would also be in
NLogSpace. The language co-PATH is the set of triples ⟨G, u, v⟩ such that G is a directed graph with no
path from u to v. We now show a nondeterministic logarithmic space machine for co-PATH.
The nondeterministic machine should somehow conclude that no path exists between the two vertices u and
v in the given directed graph. Let us ignore the space aspect for a moment. Then, one way of doing this is
as follows. The nondeterministic machine tries to partition the vertex set of the given graph into two parts
VR and VN R . VR contains the set of vertices that are reachable from u and VN R contains the set of vertices
that are not reachable from u. If v ∈ VN R then M accepts and if v ∈ VR then M rejects. To see whether a
vertex x is to be placed in VR , M can check using nondeterminism if there is a path from u to x. This can
be done in logarithmic space. If such a verification fails, then it means that the vertex x should be in
VN R . Once vertex v is characterized, the machine can decide accordingly.
In the above, we are storing n bits to indicate, for each vertex, whether it is in VR or its complement. This
is more space than what we can afford to use. To reduce the space, we note the following observations. Firstly,
we need not know for every vertex x whether x is in VR or not. It suffices if we can account for the number
of vertices in VR . How do we know |VR |? Assume for a moment that we know |VR |. Then, convince yourself
that the machine M can run in logarithmic space using nondeterminism.
To find |VR | we proceed as follows. VR can be written as the union of sets VR^i where VR^i is the set of
vertices that are at distance at most i from u. Note that VR^0 = {u}. And, VR^{i+1} can be computed from
VR^i as follows. For each vertex x, it is verified if x is in VR^i . (Note: we cannot store this information.) If
so, then for every vertex y such that (x, y) is an edge in G, the vertex y is in VR^{i+1} . This is similar to a
space-reduced version of BFS. □
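The layered counting can be mimicked deterministically in Python. In this sketch (our own `count_reachable` helper), membership of x in the set of vertices within distance i of u is re-derived on demand by a bounded-depth search instead of being stored, mirroring the proof's refusal to keep all of VR on the work tape; the real theorem replaces this re-verification with a nondeterministic guess-and-verify.

```python
def count_reachable(adj, u):
    """Inductive-counting flavour of reachability counting.

    within(x, i) re-derives, on every call, whether x is within
    distance i of u; nothing about the reachable set is ever stored.
    """
    def within(x, i):
        # is there a path u -> x of length at most i?
        if x == u:
            return True
        if i == 0:
            return False
        # x is within distance i iff some in-neighbour y of x
        # is within distance i - 1
        return any(within(y, i - 1) and x in adj[y] for y in adj)

    n = len(adj)
    # every reachable vertex is within distance n - 1 of u
    return sum(1 for x in adj if within(x, n - 1))

g = {0: {1}, 1: {2}, 2: set(), 3: {0}}
# from vertex 0, exactly the vertices 0, 1, 2 are reachable
```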
