Theory of Computation A
Contents
1 Introduction
1.1 Purpose and motivation
1.1.1 Complexity theory
1.1.2 Computability theory
1.1.3 Automata theory
1.1.4 This course
1.2 Mathematical preliminaries
1.3 Proof techniques
1.3.1 Direct proofs
1.3.2 Constructive proofs
1.3.3 Nonconstructive proofs
1.3.4 Proofs by contradiction
1.3.5 The pigeonhole principle
1.3.6 Proofs by induction
1.3.7 More examples of proofs
2.5.1 An example
2.6 Closure under the regular operations
2.7 Regular expressions
2.8 Equivalence of regular expressions and regular languages
2.8.1 Every regular expression describes a regular language
2.8.2 Converting a DFA to a regular expression
2.9 The pumping lemma and nonregular languages
2.9.1 Applications of the pumping lemma
2.10 Higman’s Theorem
2.10.1 Dickson’s Theorem
2.10.2 Proof of Higman’s Theorem
3 Context-Free Languages
3.1 Context-free grammars
3.2 Examples of context-free grammars
3.2.1 Properly nested parentheses
3.2.2 A context-free grammar for a nonregular language
3.2.3 A context-free grammar for the complement of a nonregular language
3.2.4 A context-free grammar that verifies addition
3.3 Regular languages are context-free
3.3.1 An example
3.4 Chomsky normal form
3.4.1 An example
3.5 Pushdown automata
3.6 Examples of pushdown automata
3.6.1 Properly nested parentheses
3.6.2 Strings of the form 0^n 1^n
3.6.3 Strings with b in the middle
3.7 Equivalence of pushdown automata and context-free grammars
3.8 The pumping lemma for context-free languages
3.8.1 Proof of the pumping lemma
3.8.2 Applications of the pumping lemma
4 Turing Machines and the Church-Turing Thesis
4.1 Definition of a Turing machine
4.2 Examples of Turing machines
4.2.1 Accepting palindromes using one tape
4.2.2 Accepting palindromes using two tapes
4.2.3 Accepting a^n b^n c^n using one tape
4.2.4 Accepting a^n b^n c^n using tape alphabet {a,b,c}
4.2.5 Accepting a^m b^n c^{mn} using one tape
4.3 Multi-tape Turing machines
4.4 The Church-Turing Thesis
5 Complexity Theory
5.1 The running time of algorithms
5.2 The complexity class P
5.2.1 Some examples
6 Summary
References
Chapter 1
Introduction
The main question asked in this field is "What makes some problems computationally hard and other problems easy?"
Informally, a problem is called "easy" if it is efficiently solvable. Examples of "easy" problems are (i) sorting a sequence of, say, 1,000,000 numbers, (ii) searching for a name in a telephone directory, and (iii) computing the fastest way to drive from Ottawa to Miami. On the other hand, a problem is called "hard" if it cannot be solved efficiently, or if we do not know whether it can be solved efficiently. Examples of "hard" problems are (i) time table scheduling for all courses at Carleton, (ii) factoring a 300-digit integer into its prime factors, and (iii) computing a layout for chips in VLSI.
In the 1930's, Gödel, Turing, and Church discovered that some of the fundamental mathematical problems cannot be solved by a "computer". (This may sound strange, because computers were invented only in the 1940's.) An example of such a problem is "Is an arbitrary mathematical statement true or false?" To attack such a problem, we need formal definitions of the notions of
● computer,
● algorithm, and
● computation.
● Context-Free Grammars. These are used to define programming languages and in Artificial Intelligence.
In this course, we will study the last two areas in reverse order: We will start with Automata Theory, followed by Computability Theory. The first area, Complexity Theory, will be covered in COMP 3804.
Before we start, we will review some mathematical proof techniques. As you may guess, this is a fairly theoretical course, with lots of definitions, theorems, and proofs. You may think that this course is fun stuff for math lovers, but boring and irrelevant for others. You got it wrong, and here are the reasons:
4. This course helps you to develop problem-solving skills. Theory teaches you how to think, prove, argue, solve problems, express yourself, and abstract.
Throughout this course, we will assume that you know the following mathematical concepts:
6. If A and B are sets, then A is a subset of B, written as A ⊆ B, if every element of A is also an element of B. For example, the set of even natural numbers is a subset of the set of all natural numbers. Every set A is a subset of itself, i.e., A ⊆ A. The empty set is a subset of every set A, i.e., ∅ ⊆ A.
7. If B is a set, then the power set P(B) of B is the set of all subsets of B:
P(B) = {A : A ⊆ B}.
Observe that ∅ ∈ P(B) and B ∈ P(B).
A ∪ B = {x : x ∈ A or x ∈ B},
A ∩ B = {x : x ∈ A and x ∈ B},
A \ B = {x : x ∈ A and x ∉ B},
Ā = {x : x ∉ A}.
9. A binary relation on two sets A and B is a subset of A × B.
10. A function f from A to B is a binary relation R having the property that for each element a of A, there is exactly one ordered pair in R whose first component is a. We write f(a) = b, and say that f maps a to b, or that the image of a under f is b. The set A is called the domain of f, and the set B is called the range of f.
(b) R is symmetric: For all a and b in A, if (a,b) ∈ R, then also (b,a) ∈ R.
(c) R is transitive: For all a, b, and c in A, if (a,b) ∈ R and (b,c) ∈ R, then also (a,c) ∈ R.
edges. The figure below shows some well-known graphs: K5 (the complete graph on five vertices), K3,3 (the complete bipartite graph on 2 × 3 = 6 vertices), and the Petersen graph.
17. The Boolean values are 1 and 0, which represent true and false, respectively. The basic Boolean operations include
proof is by solving a large number of problems. Here are some useful tips. (You may want to look at the book How to Solve It, by G. Polya.)
4. Try to write down the proof once you have it. This is to guarantee the correctness of your proof. Often, mistakes are found at the time of writing.
1.3.1 Direct proofs
Since 2(2k^2 + 2k) is even, and "even plus one is odd", we conclude that n^2 is odd.
Theorem 1.3.2 Let G = (V,E) be a graph. Then the sum of the degrees of all vertices is an even integer, i.e., Σ_{v∈V} deg(v) is even.
Proof. If you do not see the meaning of this statement, then first try it out for a few graphs. The reason why the statement holds is very simple: Each edge contributes 2 to the sum (because an edge is incident on exactly two distinct vertices).
Theorem 1.3.3 Let G = (V,E) be a graph. Then the sum of the degrees of all vertices is equal to twice the number of edges, i.e., Σ_{v∈V} deg(v) = 2|E|.
1.3.2 Constructive proofs
This technique not only shows the existence of a certain object, it actually provides a method for constructing it. Here is what a constructive proof looks like:
And here is the proof that the object satisfies property P:
Proof. Define
V = {0,1,2,...,n − 1},
and
1.3.3 Nonconstructive proofs
Theorem 1.3.6 There exist irrational numbers x and y such that x^y is rational.
Case 1: √2^√2 ∈ Q.
In this case, we take x = y = √2. In Theorem 1.3.9 below, we will prove that √2 is irrational.
Case 2: √2^√2 ∉ Q.
In this case, we take x = √2^√2 and y = √2. Since
x^y = (√2^√2)^√2 = √2^2 = 2,
the number x^y is rational.
Observe that this proof indeed proves the theorem, but it does not give an example of a pair of irrational numbers x and y such that x^y is rational.
Proof. Assume that statement S is false. Then, derive a contradiction (such as 1 + 1 = 3).
In other words, show that the statement "¬S ⇒ false" is true. This is sufficient, because the contrapositive of the statement "¬S ⇒ false" is the statement "true ⇒ S". The latter logical formula is equivalent to S, and that is what we wanted to show.
We have shown that m and n are both even. But we know that m and n are not both even. Hence, we have a contradiction. Our assumption that √2 is rational is wrong. Therefore, we conclude that √2 is irrational.
The pigeonhole principle: If n + 1 or more objects are placed into n boxes, then there is at least one box containing two or more objects. In other words, if A and B are two sets such that |A| > |B|, then there is no one-to-one function from A to B.
longest increasing subsequence that starts at ai, and let deci denote the length of the longest decreasing subsequence that starts at ai.
(inci,deci) = (incj,decj).
First assume that ai < aj. Then the length of the longest increasing subsequence starting at ai must be at least 1 + incj, because we can append ai to the front of the longest increasing subsequence starting at aj. Hence, inci ≠ incj, which is a contradiction.
The second case is when ai > aj. Then the length of the longest decreasing subsequence starting at ai must be at least 1 + decj, because we can append ai to the front of the longest decreasing subsequence starting at aj. Hence, deci ≠ decj, which is again a contradiction.
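The argument above is easy to experiment with. The following Python sketch (our illustration, not part of the original notes) computes the pair (inci, deci) for every position of a sequence of distinct numbers and confirms that all pairs are distinct, which is exactly what the pigeonhole argument exploits.

```python
def inc_dec_pairs(a):
    """For each index i, compute the length of the longest increasing and
    the longest decreasing subsequence of a that starts at position i."""
    n = len(a)
    inc = [1] * n
    dec = [1] * n
    # inc[i] and dec[i] only depend on later positions j > i,
    # so we fill the arrays from right to left.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if a[i] < a[j]:
                inc[i] = max(inc[i], 1 + inc[j])
            elif a[i] > a[j]:
                dec[i] = max(dec[i], 1 + dec[j])
    return list(zip(inc, dec))

pairs = inc_dec_pairs([4, 11, 2, 9, 3, 14, 1, 7])
assert len(set(pairs)) == len(pairs)   # all pairs (inci, deci) are distinct
print(pairs)
```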
1.3.6 Proofs by induction
Induction step: Prove that for all n ≥ 1, the following holds: If P(n) is true, then P(n + 1) is also true.
Proof. We start with the basis of the induction. If n = 1, then the left-hand side is equal to 1, and so is the right-hand side. So the theorem is true for n = 1.
For the induction step, let n ≥ 1 and assume that the theorem is true for n, i.e., assume that
By the way, here is an alternative proof of the theorem above: Let S = 1 + 2 + 3 + ... + n. Then, writing the sum backwards and adding the two copies term by term,
2S = (1 + n) + (2 + (n − 1)) + ... + (n + 1).
Since there are n terms on the right-hand side, each equal to n + 1, we have 2S = n(n + 1). This implies that S = n(n + 1)/2.
Chapter 2
Finite Automata and Regular Languages
In this chapter, we introduce and analyze the class of languages that are known as regular languages. Informally, these languages can be "processed" by computers having a very small amount of memory.
When a car arrives at the toll gate, the gate is closed. The gate opens as soon as the driver has paid 25 cents. We assume that we have only three coin denominations: 5, 10, and 25 cents. We also assume that no excess change is returned.
● The machine is in state q0, if it has not collected any money yet.
● The machine is in state q1, if it has collected exactly 5 cents.
● The machine is in state q2, if it has collected exactly 10 cents.
● The machine is in state q3, if it has collected exactly 15 cents.
● The machine is in state q4, if it has collected exactly 20 cents.
The figure below represents the behavior of the machine for all possible sequences of coins. State q5 is represented by two circles, because it is a special state: As soon as the machine reaches this state, the gate opens.
Observe that the machine (or computer) only has to remember which state it is in at any given time. Thus, it needs only a very small amount of memory: It has to be able to distinguish between any one of six possible states and, therefore, it only needs a memory of ⌈log 6⌉ = 3 bits.
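As a concrete illustration (ours, not from the notes), the following Python sketch simulates the toll gate automaton; the state qk records that 5k cents have been collected, and q5 is the accept state.

```python
# Simulation of the toll gate finite automaton. State "qk" means that
# 5k cents have been collected; q5 is the accept state (gate opens).
def delta(state, coin):
    amount = 5 * int(state[1])          # cents collected so far
    amount = min(amount + coin, 25)     # no excess change is returned
    return "q" + str(amount // 5)

def gate_opens(coins):
    state = "q0"                        # start state: no money collected
    for coin in coins:                  # coins is a sequence over {5, 10, 25}
        state = delta(state, coin)
    return state == "q5"

print(gate_opens([10, 10, 5]))   # True: 25 cents have been collected
print(gate_opens([5, 10]))       # False: only 15 cents
```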
● After having read the first 1, the machine switches from state q1 to state q2.
● After having read the first 0, the machine switches from state q2 to state q3.
● After having read the third 1, the machine switches from state q3 to state q2.
After the entire string 1101 has been processed, the machine is in state q2, which is an accept state. We say that the string 1101 is accepted by the machine.
Consider now the input string 0101010. After having read this string from left to right (starting in the start state q1), the machine is in state q3. Since q3 is not an accept state, we say that the machine rejects the string 0101010.
We hope you see that this machine accepts every binary string that ends with a 1. In fact, the machine accepts more strings:
● Every binary string having the property that there is an even number of 0s after the rightmost 1, is accepted by this machine.
5. F is a subset of Q; the elements of F are called accept states.
You can think of the transition function δ as being the "program" of the finite automaton M = (Q,Σ,δ,q,F). This function tells us what M can do in "one step":
The "computer" that we designed in the toll gate example in Section 2.1 is a finite automaton. For this example, we have Q = {q0,q1,q2,q3,q4,q5}, Σ = {5,10,25}, the start state is q0, F = {q5}, and δ is given by the following table:
The example given in the beginning of this section is also a finite automaton. For this example, we have Q = {q1,q2,q3}, Σ = {0,1}, the start state is q1, F = {q2}, and δ is given by the following table:
We now give a formal definition of the language of a finite automaton:
● r0 = q,
● ri+1 = δ(ri,wi+1), for i = 0,1,...,n − 1.
all strings over the alphabet Σ. (Σ* includes the empty string ε.) We extend the function δ to a function
δ : Q × Σ* → Q,
that is defined as follows. For any state r ∈ Q and for any string w over the alphabet Σ, δ(r,w) is the state that M reaches when it starts in state r and reads the string w from left to right.
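This extension is easy to express in code. The sketch below (ours) represents δ as a Python dictionary and computes δ(r,w) by reading w one symbol at a time; the transition table is a reconstruction of the example automaton from Section 2.2.

```python
def delta_star(delta, r, w):
    """The extended transition function: the state reached from r
    after reading the string w one symbol at a time."""
    for symbol in w:
        r = delta[(r, symbol)]
    return r

# Transition table reconstructed from the example in Section 2.2:
delta = {("q1", "0"): "q1", ("q1", "1"): "q2",
         ("q2", "0"): "q3", ("q2", "1"): "q2",
         ("q3", "0"): "q2", ("q3", "1"): "q2"}
print(delta_star(delta, "q1", "1101"))     # q2: the string 1101 is accepted
print(delta_star(delta, "q1", "0101010"))  # q3: the string 0101010 is rejected
```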
Let
A = {w : w is a binary string containing an odd number of 1s}.
How do we construct M? Here is a first idea: The finite automaton reads the input string w from left to right and keeps track of the number of 1s it has seen. After having read the entire string w, it checks whether this number is odd (in which case w is accepted) or even (in which case w is rejected). Using this approach, the finite automaton needs a state for every integer i ≥ 0, indicating that the number of 1s read so far is equal to i. Hence, to design a finite automaton that follows this approach, we need an infinite number of states. But the definition of a finite automaton requires the number of states to be finite.
● The set of states is Q = {qe, qo}. If the finite automaton is in state qe, then it has read an even number of 1s; if it is in state qo, then it has read an odd number of 1s.
● The start state is qe, because at the start, the number of 1s read by the automaton is equal to 0, and 0 is even.
We have constructed a finite automaton M that accepts the language A. Therefore, A is a regular language.
★ On the other hand, if the next symbol is 0, then it switches to the state "maybe the next symbol is 1".
○ If the next symbol is indeed 1, then it switches to the accept state (but keeps reading until the end of the string).
○ On the other hand, if the next symbol is 0, then it switches to the start state, and skips 0s until it reads 1 again.
○ By defining the following four states, this process will become clear:
● q1: M is in this state if the last symbol read was 1, but the substring 101 has not been read.
● q10: M is in this state if the last two symbols read were 10, but the substring 101 has not been read.
● q101: M is in this state if the substring 101 has been read in the input string.
● Q = {q, q1, q10, q101},
● Σ = {0,1},
The figure below gives the state diagram of the finite automaton M = (Q,Σ,δ,q,F).
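Here is a small Python sketch (ours) of this DFA, together with a brute-force cross-check against a direct substring test on all short binary strings.

```python
from itertools import product

# DFA accepting binary strings containing 101, with the four states above.
DELTA = {("q", "0"): "q",       ("q", "1"): "q1",
         ("q1", "0"): "q10",    ("q1", "1"): "q1",
         ("q10", "0"): "q",     ("q10", "1"): "q101",
         ("q101", "0"): "q101", ("q101", "1"): "q101"}  # q101 is absorbing

def contains_101(w):
    state = "q"
    for symbol in w:
        state = DELTA[(state, symbol)]
    return state == "q101"

# Cross-check against a direct substring test on all strings of length < 8.
for n in range(8):
    for t in product("01", repeat=n):
        w = "".join(t)
        assert contains_101(w) == ("101" in w)
print("DFA agrees with the substring test")
```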
prove this, we have to construct a finite automaton M such that A = L(M). At first sight, it seems difficult (or even impossible?) to construct such a finite automaton: How does the automaton "know" that it has reached the third symbol from the right? It is, however, possible to construct such an automaton. The basic idea is to remember the last three symbols that have been read. Thus, the finite automaton has eight states qijk, where i, j, and k range over the two elements of {0,1}. If the automaton is in state qijk, then the following hold:
● If M has read at least three symbols, then the three most recently read symbols are ijk.
● If M has read only two symbols, then these two symbols are jk; moreover, i = 0.
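The eight-state construction can be simulated directly: the state is the triple (i,j,k) of the last three symbols read, padded with 0s. A short sketch (ours):

```python
# DFA idea: remember the last three symbols read (state q_ijk).
def third_symbol_from_right_is_1(w):
    i, j, k = "0", "0", "0"        # start state q000 (fewer than 3 symbols read)
    for symbol in w:
        i, j, k = j, k, symbol     # shift the new symbol into the state
    return i == "1"                # accept states are the states q1jk

assert third_symbol_from_right_is_1("0100")       # third from right is 1
assert not third_symbol_from_right_is_1("0010")   # third from right is 0
assert not third_symbol_from_right_is_1("11")     # fewer than three symbols
```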
2.3 Regular Operations
A ∪ B = {0, 01, 1, 10},
AB = {01, 010, 011, 0110},
and
A* = {ε, 0, 01, 00, 001, 010, 0101, 000, 0001, 00101, ...}.
As another example, if Σ = {0,1}, then Σ* is the set of all binary strings (including the empty string). Observe that a string always has a finite length.
Proof. Since A and B are regular languages, there are finite automata M1 = (Q1,Σ,δ1,q1,F1) and M2 = (Q2,Σ,δ2,q2,F2) that accept A and B, respectively. To prove that A ∪ B is regular, we have to construct a finite automaton M that accepts A ∪ B. In other words, M must have the property that for every string w ∈ Σ*,
● On the other hand, if, after having read w, M1 is in a state that is not in F1, then w ∉ A and M "runs" M2 on w, starting in the start state q2 of M2. If, after having read w, M2 is in a state of F2, then we know that w ∈ B, thus w ∈ A ∪ B and, therefore, M accepts w. Otherwise, we know that w ∉ A ∪ B, and M rejects w.
This idea does not work, because the finite automaton M can read the input string w only once. The correct approach is to run M1 and M2 simultaneously. We define the set Q of states of M to be the Cartesian product Q1 × Q2. If M is in state (r1,r2), this means that
δ((r1,r2),a) = (δ1(r1,a), δ2(r2,a)),
i.e.,
The latter equality implies that (2.1) holds and, therefore, M indeed accepts the language A ∪ B.
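The product construction is short to implement. In this sketch (ours), DFAs are given as transition dictionaries; the function builds the reachable part of the product automaton, whose accept states are the pairs with at least one accepting component.

```python
def product_dfa(delta1, start1, accept1, delta2, start2, accept2, alphabet):
    """Build the product DFA for the union: state (r1, r2) means
    'M1 is in state r1 and M2 is in state r2'."""
    start = (start1, start2)
    delta, todo, seen = {}, [start], {start}
    while todo:                       # generate only the reachable pairs
        r1, r2 = todo.pop()
        for a in alphabet:
            s = (delta1[(r1, a)], delta2[(r2, a)])
            delta[((r1, r2), a)] = s
            if s not in seen:
                seen.add(s)
                todo.append(s)
    accept = {r for r in seen if r[0] in accept1 or r[1] in accept2}
    return delta, start, accept

# Example: M1 accepts strings with an even number of 0s,
#          M2 accepts strings with an even number of 1s.
d1 = {("e", "0"): "o", ("o", "0"): "e", ("e", "1"): "e", ("o", "1"): "o"}
d2 = {("e", "0"): "e", ("o", "0"): "o", ("e", "1"): "o", ("o", "1"): "e"}
delta, state, accept = product_dfa(d1, "e", {"e"}, d2, "e", {"e"}, "01")
for a in "0101":
    state = delta[(state, a)]
print(state in accept)   # True: "0101" has an even number of 0s
```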
string u, M has to decide whether or not u can be broken into two strings w and w′ (i.e., write u as u = ww′), such that w ∈ A and w′ ∈ B. In words, M has to decide whether or not u can be broken into two substrings, such that the first substring is accepted by M1 and the second substring is accepted by M2. The difficulty is caused by the fact that M has to make this decision by scanning the string u only once. If u ∈ AB, then M has to decide, during this single scan, where to break u into two substrings. And, if u ∉ AB, then M has to decide, during this single scan, that u cannot be broken into two substrings such that the first substring is in A and the second substring is in B.
The finite automata that we have seen so far are deterministic. This means the following:
From now on, we will call such a finite automaton a deterministic finite automaton (DFA). In the next section, we will define the notion of a nondeterministic finite
automaton (NFA). For such an automaton, there are zero or more possible states to switch to. At first sight, nondeterministic finite automata seem to be more powerful than their deterministic counterparts.
You will notice three differences with the finite automata that we have seen so far. First, if the automaton is in state q1 and reads the symbol 1, then it has two options: Either it stays in state q1, or it switches to state q2. Second, if the automaton is in state q2, then it can switch to state q3 without reading a symbol; this is indicated by the edge having the empty string ε as label.
Third, if the automaton is in state q3 and reads the symbol 0, then it cannot continue.
Let us see what this automaton can do when it gets the string 010110 as input. Initially, the automaton is in the start state q1.
● The second symbol is 1, and the automaton can either stay in state q1 or switch to state q2.
If we continue in this way, then we see that, for the input string 010110, there are seven possible computations. All these computations are given in the figure below.
Consider the lowest path in the figure above:
● When reading the first symbol, the automaton stays in state q1.
From the figure, you can see that, out of the seven possible computations, exactly two end in the accept state q4 (after the entire input string 010110 has been read). We say that the automaton accepts the string 010110, because there is at least one computation that ends in the accept state.
Now consider the input string 010. In this case, there are three possible computations:
The NFA given above accepts all binary strings that contain 101 or 11 as a substring. All other binary strings are rejected.
This NFA does the following. If it is in the start state q1 and reads the symbol 1, then it either stays in state q1 or it "guesses" that this symbol is the third symbol from the right in the input string. In the latter case, the NFA switches to state q2, and then it "verifies" that there are indeed exactly two remaining symbols in the input string. If there are more than two remaining symbols, then the NFA hangs (in state q4) after having read the next two symbols.
Observe how this guessing mechanism is used: The automaton can only read the input string once, from left to right. Hence, it does not know when it reaches the third symbol from the right. When the NFA reads a 1, it can guess that this is the third symbol from the right; after having made this guess, it verifies whether the guess was correct.
In Section 2.2.3, we have seen a DFA for the same language A. Observe that the NFA has a much simpler structure than the DFA.
where 0^k is the string consisting of k many 0s. (If k = 0, then 0^k = ε.)
and
A2 = {0^k : k ≡ 0 mod 3}.
Recall the notion of a power set: For any set Q, the power set of Q, denoted by P(Q), is the set of all subsets of Q, i.e.,
P(Q) = {R : R ⊆ Q}.
Definition 2.4.2 Let M = (Q,Σ,δ,q,F) be an NFA, and let w ∈ Σ*. We say that M accepts w, if w can be written as w = y1y2...ym, where yi ∈ Σε for all i with 1 ≤ i ≤ m, and there exists a sequence r0,r1,...,rm of states in Q, such that
● r0 = q,
● ri+1 ∈ δ(ri,yi+1), for i = 0,1,...,m − 1, and
● rm ∈ F.
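Definition 2.4.2 suggests a direct simulation: instead of guessing one computation, track the set of all states the NFA could be in. A sketch (ours), where the empty string "" plays the role of ε; the example table is the NFA above that accepts strings containing 101 or 11.

```python
def eps_closure(delta, states):
    """All states reachable from 'states' by zero or more ε-transitions."""
    result, todo = set(states), list(states)
    while todo:
        r = todo.pop()
        for s in delta.get((r, ""), set()):   # "" plays the role of ε
            if s not in result:
                result.add(s)
                todo.append(s)
    return result

def nfa_accepts(delta, start, accept, w):
    current = eps_closure(delta, {start})
    for a in w:
        moved = set()
        for r in current:
            moved |= delta.get((r, a), set())
        current = eps_closure(delta, moved)
    return bool(current & accept)             # some computation accepts

# The NFA from Section 2.4 that accepts strings containing 101 or 11:
delta = {("q1", "0"): {"q1"}, ("q1", "1"): {"q1", "q2"},
         ("q2", "0"): {"q3"}, ("q2", ""): {"q3"},
         ("q3", "1"): {"q4"},
         ("q4", "0"): {"q4"}, ("q4", "1"): {"q4"}}
print(nfa_accepts(delta, "q1", {"q4"}, "010110"))  # True
print(nfa_accepts(delta, "q1", {"q4"}, "010"))     # False
```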
The NFA in the example in Section 2.4.1 accepts the string 01100. This can be seen by taking
You may have the impression that nondeterministic finite automata are more powerful than deterministic finite automata. In this section, we will show that this is not the case. That is, we will prove that a language can be accepted by a DFA if and only if it can be accepted by an NFA. To prove this, we will show how to convert an arbitrary NFA to a DFA that accepts the same language.
In the rest of this section, we will show how to convert an NFA to a DFA:
Proof. Recall that the NFA N can (in general) perform more than one computation on a given input string. The idea of the proof is to construct a DFA M that runs all these different computations simultaneously. (We have seen this idea already in the proof of Theorem 2.3.1.) To be more precise, the DFA M will have the following property:
Let us see what the transition function δ′ of M does. First observe that, since N is an NFA, δ(r,a) is a subset of Q. This implies that δ′(R,a) is the union of
subsets of Q and, therefore, also a subset of Q. Hence, δ′(R,a) is an element of Q′.
The set δ(r,a) is equal to the set of all states of the NFA N that can be reached from state r by reading the symbol a. We take the union of these sets δ(r,a), where r ranges over all elements of R, to obtain the new set δ′(R,a). This new set is the state that the DFA M reaches from state R, by reading the symbol a.
In this way, we obtain the correspondence that was given in the beginning of this proof.
After this warm-up, we can consider the general case. In other words, from now on, we allow ε-transitions in the NFA N. The DFA M is defined as above, except that the start state q′ and the transition function δ′ have to be modified. Recall that a computation of the NFA N consists of the following:
1. Make zero or more ε-transitions.
2. Read one "real" symbol of Σ and move to a new state (or stay in the current state).
3. Make zero or more ε-transitions.
4. Read one "real" symbol of Σ and move to a new state (or stay in the current state).
5. Make zero or more ε-transitions.
6. Etc.
simulation is implicitly encoded in the definition of the start state q′ of M.
● Etc.
Thus, in one step, the DFA M simulates the reading of one "real" symbol of Σ, followed by making zero or more ε-transitions:
δ′(R,a) = ⋃r∈R Cε(δ(r,a)).
To summarize, the NFA N = (Q,Σ,δ,q,F) is converted to the DFA M = (Q′,Σ,δ′,q′,F′), where
● Q′ = P(Q),
● q′ = Cε({q}),
● F′ = {R ∈ Q′ : R ∩ F ≠ ∅},
● δ′ : Q′ × Σ → Q′ is defined as follows: For each R ∈ Q′ and for each a ∈ Σ,
δ′(R,a) = ⋃r∈R Cε(δ(r,a)).
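The conversion can be coded almost verbatim. This sketch (ours) computes the ε-closure Cε and builds the DFA; unlike the proof, which takes all of P(Q), it only generates the subsets that are reachable from the start state, the simplification also used in the example below.

```python
def nfa_to_dfa(delta, start, accept, alphabet):
    """Subset construction: DFA states are frozensets of NFA states."""
    def closure(states):               # the ε-closure C_ε
        result, todo = set(states), list(states)
        while todo:
            r = todo.pop()
            for s in delta.get((r, ""), set()):   # "" stands for ε
                if s not in result:
                    result.add(s)
                    todo.append(s)
        return frozenset(result)

    q0 = closure({start})
    dfa_delta, todo, seen = {}, [q0], {q0}
    while todo:
        R = todo.pop()
        for a in alphabet:
            # union of delta(r, a) over all r in R, then take the ε-closure
            S = closure(set().union(*(delta.get((r, a), set()) for r in R)))
            dfa_delta[(R, a)] = S
            if S not in seen:
                seen.add(S)
                todo.append(S)
    dfa_accept = {R for R in seen if R & accept}
    return dfa_delta, q0, dfa_accept
```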
2.5.1 An Example
We will show how to convert this NFA N to a DFA M that accepts the same language. Following the proof of Theorem 2.5.1, the DFA M is specified by M = (Q′,Σ,δ′,q′,F′), where each of the components is defined below.
● Q′ = P(Q). Hence,
Q′ = {∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}}.
● F′ = {{2}, {1,2}, {2,3}, {1,2,3}}.
In this example, δ′ is given by
● The state {2} has only one incoming edge; it comes from the state {3}. Since {3} cannot be reached from the start state, {2} cannot be reached from the start state either.
Hence, we can remove the four states {1}, {2}, {3}, and {1,3}. The resulting DFA accepts the same language as the DFA above. This leads to the following state diagram, which describes a DFA that accepts the same language as the NFA N:
section, we will see that the notion of an NFA, together with Theorem 2.5.2, can be used to give a simple proof of the fact that the regular languages are indeed closed under the regular operations. We start by giving an alternative proof of Theorem 2.3.1:
3. F = F1 ∪ F2.
In the final theorem of this section, we mention (without proof) two more closure properties of the regular languages:
Theorem 2.6.3 The set of regular languages is closed under the complement and intersection operations:
Ā = {w ∈ Σ* : w ∉ A}
A1 ∩ A2 = {w ∈ Σ* : w ∈ A1 and w ∈ A2}
That is, the language described by this expression is
{00, 001, 0011, 00111, ..., 10, 101, 1011, 10111, ...}.
Here are some more examples (in all cases, the alphabet is {0,1}):
Definition 2.7.1 Let Σ be a non-empty alphabet.
1. ε is a regular expression.
2. ∅ is a regular expression.
You can regard 1., 2., and 3. as being the "building blocks" of regular expressions. Items 4., 5., and 6. give rules that can be used to combine regular expressions into new (and "larger") regular expressions. To give an example, we claim that
(0 ∪ 1)* 101 (0 ∪ 1)*
● Since 0 ∪ 1 is a regular expression, by 6., (0 ∪ 1)* is a regular expression.
● Since 1 and 0 are regular expressions, by 5., 10 is a regular expression.
6. Let R be a regular expression and let L be the language described by it. The regular expression R* describes the language L*.
Hence, even though (0 ∪ ε)1* and 01* ∪ 1* are different regular expressions, we write
(0 ∪ ε)1* = 01* ∪ 1*,
In Section 2.8.2, we will show that every regular language can be described by a regular expression. The proof of this fact is purely algebraic and uses the following algebraic identities involving regular expressions.
1. R1∅ = ∅R1 = ∅.
2. R1ε = εR1 = R1.
3. R1 ∪ ∅ = ∅ ∪ R1 = R1.
4. R1 ∪ R1 = R1.
5. R1 ∪ R2 = R2 ∪ R1.
6. R1(R2 ∪ R3) = R1R2 ∪ R1R3.
7. (R1 ∪ R2)R3 = R1R3 ∪ R2R3.
8. R1(R2R3) = (R1R2)R3.
9. ∅* = ε.
10. ε* = ε.
11. (ε ∪ R1)* = R1*.
12. (ε ∪ R1)(ε ∪ R1)* = R1*.
13. R1*(ε ∪ R1) = (ε ∪ R1)R1* = R1*.
14. R1*R2 ∪ R2 = R1*R2.
15. R1(R2R1)* = (R1R2)*R1.
16. (R1 ∪ R2)* = (R1*R2)*R1* = (R2*R1)*R2*.
(0 ∪ ε)1* = 01* ∪ 1*.
We can verify this identity in the following way:
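As an independent sanity check (ours, in addition to the algebraic verification), one can compare the two expressions, translated into Python's re syntax, on all short binary strings:

```python
import re
from itertools import product

lhs = re.compile(r"(0|)1*\Z")     # (0 ∪ ε)1*
rhs = re.compile(r"(01*|1*)\Z")   # 01* ∪ 1*

for n in range(10):
    for t in product("01", repeat=n):
        w = "".join(t)
        assert bool(lhs.match(w)) == bool(rhs.match(w))
print("the two expressions agree on all strings of length < 10")
```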
(The proof is by induction on the way R is "built" using the "rules" given in Definition 2.7.1.)
The first base case: Assume that R = ε. Then R describes the language {ε}. To prove that this language is regular, it suffices, by Theorem 2.5.2, to construct an NFA M = (Q,Σ,δ,q,F) that accepts this language. This NFA is obtained by defining Q = {q}, q is the start state, F = {q}, and δ(q,a) = ∅ for all a ∈ Σε. The figure below gives the state diagram of M:
The second base case: Assume that R = ∅. Then R describes the language ∅. To prove that this language is regular, it suffices, by Theorem 2.5.2, to construct an NFA M = (Q,Σ,δ,q,F) that accepts this language. This NFA is obtained by defining Q = {q}, q is the start state, F = ∅, and δ(q,a) = ∅ for all a ∈ Σε. The figure below gives the state diagram of M:
The third base case: Let a ∈ Σ and assume that R = a. Then R describes the language {a}. To prove that this language is regular, it suffices, by Theorem 2.5.2, to construct an NFA M = (Q,Σ,δ,q1,F) that accepts this language. This NFA is obtained by defining Q = {q1,q2}, q1 is the start state, F = {q2}, and
The first case of the induction step: Assume that R = R1 ∪ R2, where R1 and R2 are regular expressions. Let L1 and L2 be the languages described by R1 and R2, respectively, and assume that L1 and L2 are regular. Then R describes the language L1 ∪ L2, which, by Theorem 2.6.1, is regular.
The third case of the induction step: Assume that R = (R1)*, where R1 is a regular expression. Let L1 be the language described by R1 and assume that L1 is regular. Then R describes the language (L1)*, which, by Theorem 2.6.3, is regular.
This concludes the proof of the claim that every regular expression describes a regular language.
where the alphabet is {a,b}. We will prove that this regular expression describes a regular language, by constructing an NFA that accepts the language described by this regular expression. Observe how the regular expression is "built":
● Take the regular expressions a and b, and combine them into the regular expression ab.
● Take the regular expression ab ∪ a, and transform it into the regular expression (ab ∪ a)*.
2.8.2 Converting a DFA to a regular expression
1. u is an element of C.
symbols, of the string v.) Since v is a string in L, which is equal to
BL ∪ C, v is a string in BL ∪ C. Hence, there are two possibilities for
v.
(b) v is an element of BL. In this case, there are strings b′ ∈ B and w ∈ L such that v = b′w. Since ε ∉ B, we have b′ ≠ ε and, therefore, |w| < |v|. Since w is a string in L, which is equal to BL ∪ C, w is a string in BL ∪ C. Hence, there are two possibilities for w.
This process hopefully convinces you that any string u in L can be written as the concatenation of zero or more strings in B, followed by one string in C. In fact, L consists of exactly those strings having this property:
Lemma 2.8.2 Let Σ be an alphabet, and let B, C, and L be languages in Σ* such that ε ∉ B and L = BL ∪ C.
Then
L = B*C.
Proof. First, we show that B*C ⊆ L. Let u be an arbitrary string in B*C. Then u is the concatenation of k strings of B, for some k ≥ 0, followed by one string of C. We proceed by induction on k.
least one; thus, the length of v is less than the length of u. By induction, v is a string in B*C. Hence, u = bv, where b ∈ B and v ∈ B*C. This shows that u ∈ B(B*C). Since B(B*C) ⊆ B*C, it follows that u ∈ B*C.
Note that Lemma 2.8.2 holds for any language B that does not contain the empty string ε. For example, assume that B = ∅. Then the language L satisfies the equation
L = BL ∪ C = ∅L ∪ C.
The conversion
We will now use Lemma 2.8.2 to prove that every DFA can be converted to a regular expression.
We will show that each such language Lr can be described by a regular expression. Since L(M) = Lq, this will prove that L(M) can be described by a regular expression.
The basic idea is to set up equations for the languages Lr, which we then solve using Lemma 2.8.2. We claim that
Why is this true? Let w be a string in Lr. Then the path P in the state diagram of M that starts in state r and that corresponds to w ends in a state of F. Since r ∉ F, this path contains at least one edge. Let r′ be the state that follows the first state (i.e., r) of P. Then r′ = δ(r,b) for some symbol b ∈ Σ. Thus, b is the first symbol of w. Write w = bv, where v is the remaining part of w. Then the path P′ = P \ {r} in the state diagram of M that starts in state r′ and that corresponds to v ends in a state of F.
Thus, v ∈ Lr′ = Lδ(r,b). Hence,
So we now have a set of equations in the "unknowns" Lr, for r ∈ Q. The number of equations is equal to the size of Q. In other words, the number of equations is equal to the number of unknowns. The regular expression for L(M) = Lq is obtained by solving these equations using Lemma 2.8.2.
Of course, we have to convince ourselves that these equations have a solution for any given DFA. Before we deal with this issue, we give an example.
An Example
For this example, (2.2) and (2.3) give the following equations:
In the third equation, Lq2 is expressed in terms of Lq0 and Lq1. Hence, if we substitute the third equation into the first, and use Theorem 2.7.4, then we get
If we substitute Lq1 into the first equation, then we get (again using Theorem 2.7.4)
Lemma 2.8.3 Let n ≥ 1 be an integer and, for 1 ≤ i ≤ n and 1 ≤ j ≤ n, let Bij and Ci be regular expressions such that ε ∉ Bij. Let L1,L2,...,Ln be languages that satisfy
Since ε ∉ B11, it follows from Lemma 2.8.2 that L1 = B11*C1. This proves the base case.
By substituting this equation for Ln into the equations for Li, 1 ≤ i ≤ n − 1, we get
In this way, we have obtained n − 1 equations in L1,L2,...,Ln−1. Since ε ∉ Bij, it follows from the induction hypothesis that L1 can be expressed as a regular expression only involving the regular expressions Bij and Ci.
In the previous sections, we have seen that the class of regular languages is closed under various operations, and that these languages can be described by (deterministic or nondeterministic) finite automata and regular expressions. These properties helped in developing techniques for showing that a language is regular. In this section, we will present a tool that can be used to prove that certain languages are not regular. Observe that for a regular language,
This language should be nonregular, because it seems unlikely that a DFA can remember the number of 0s it has seen when it has reached the border between the 0s and the 1s. Similarly, the language
{0^n : n is a prime number}
should be nonregular, because the prime numbers do not seem to have any repetitive structure that can be used by a DFA. To be more rigorous about this, we will establish a property that all regular languages must have. This property is called the pumping lemma. If a language does not have this property, then it must be nonregular.
Proof. Let Σ be the alphabet of A. Since A is a regular language, there exists a DFA M = (Q,Σ,δ,q,F) that accepts A. We define p to be the number of states in Q.
Let s = s1s2...sn be an arbitrary string in A such that n ≥ p. Define r1 = q, r2 = δ(r1,s1), r3 = δ(r2,s2), ..., rn+1 = δ(rn,sn). Thus, when the DFA M reads the string s from left to right, it visits the states r1,r2,...,rn+1. Since s is a string in A, we know that rn+1 belongs to F.
any number i ≥ 0 of times, and the corresponding string xy^i z will still be accepted by M. It follows that xy^i z ∈ A for all i ≥ 0.
First example
Consider the language
A = {0^n 1^n : n ≥ 0}.
Second example
Assume that A is a regular language. Let p ≥ 1 be the pumping length, and consider the string s = 0^p 1^p. Then s ∈ A and |s| = 2p ≥ p. By the pumping lemma, s can be written as s = xyz, where y ≠ ε, |xy| ≤ p, and xy^i z ∈ A for all i ≥ 0.
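The case analysis that follows can also be checked mechanically. The sketch below (ours) enumerates every allowed split s = xyz of s = 0^p 1^p for a small p and verifies that pumping y down (taking i = 0) always leaves the language, which is the contradiction the proof derives.

```python
def in_A(w):
    """Membership test for A = {0^n 1^n : n >= 0}."""
    n = w.count("0")
    return w == "0" * n + "1" * n and len(w) == 2 * n

p = 7                                    # a hypothetical pumping length
s = "0" * p + "1" * p
for xy_len in range(1, p + 1):           # |xy| <= p
    for y_len in range(1, xy_len + 1):   # |y| >= 1
        x = s[:xy_len - y_len]
        y = s[xy_len - y_len:xy_len]
        z = s[xy_len:]
        assert x + y + z == s
        assert not in_A(x + z)           # i = 0: the pumped string leaves A
print("no admissible split of s can be pumped")
```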
Third example
Fourth example
Fifth example
Observe that
|s| = |xyz| = p^2
and
|xy^2 z| = |xyyz| = |xyz| + |y| = p^2 + |y|.
Sixth example
which is not a prime number, because n ≥ 2 and 1 + k ≥ 2. This is a contradiction and, therefore, A is not a regular language.
Seventh example
Since this language has a similar flavor as the one in the second example, we may suspect that A is not a regular language. This is, however, false: As we will show, A is a regular language.
This property holds because, between any two consecutive occurrences of 01, there must be exactly one occurrence of 10. Similarly, between any two consecutive occurrences of 10, there must be exactly one occurrence of 01.
● q01: the last symbol read was 1; in the part of the string read so far, the number of occurrences of 01 is one more than the number of occurrences of 10.
● q10: the last symbol read was 0; in the part of the string read so far, the number of occurrences of 10 is one more than the number of occurrences of 01.
● qequal0: the last symbol read was 0; in the part of the string read so far, the number of occurrences of 01 is equal to the number of occurrences of 10.
● qequal1: the last symbol read was 1; in the part of the string read so far, the number of occurrences of 01 is equal to the number of occurrences of 10.
In fact, the key property mentioned above implies that the language A consists of the empty string ε and all non-empty binary strings that start and end with the same symbol. Therefore, A is the language described by the regular expression
ε ∪ 0 ∪ 1 ∪ 0(0 ∪ 1)*0 ∪ 1(0 ∪ 1)*1.
This gives an alternative proof of the fact that A is a regular language.
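The key property is easy to confirm by brute force. This sketch (ours) checks, for all short binary strings, that the number of occurrences of 01 equals the number of occurrences of 10 exactly when the string is empty or starts and ends with the same symbol.

```python
from itertools import product

def occurrences(pattern, w):
    return sum(w[i:i + 2] == pattern for i in range(len(w) - 1))

for n in range(12):
    for t in product("01", repeat=n):
        w = "".join(t)
        balanced = occurrences("01", w) == occurrences("10", w)
        same_ends = w == "" or w[0] == w[-1]
        assert balanced == same_ends
print("key property verified for all strings of length < 12")
```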
Eighth example
L = {w ∈ {0,1}* : w is the binary representation of a prime number}.
We assume that for any positive integer, the leftmost bit in its binary representation is 1. In other words, we assume that there are no 0s added to the left of such a binary representation. Thus,
L = {10, 11, 101, 111, 1011, 1101, 10001, ...}.
1. |y| ≥ 1,
Let Σ be a finite alphabet. For any two strings x and y in Σ*, we say that x is a subsequence of y, if x can be obtained by deleting zero or more symbols from y. For example, 10110 is a subsequence of 0010010101010001. For any language L ⊆ Σ*, we define
SUBSEQ(L) := {x : there exists a y ∈ L such that x is a subsequence of y}.
Theorem 2.10.1 (Higman) For any finite alphabet Σ and for any language L ⊆ Σ*, the language SUBSEQ(L) is regular.
Theorem 2.10.2 (Dickson) Let S ⊆ N^n, and let M be the set consisting of all elements of S that are minimal in the relation "is dominated by". Thus,
M = {q ∈ S : there is no p in S \ {q} such that p is dominated by q}.
If p ∈ M \ {q}, then q is not dominated by p. Thus, there exists an index i such that pi ≤ qi − 1. It follows that
Since the set Sik is essentially a subset of N^{n−1}, it follows from the induction hypothesis that Sik contains finitely many minimal elements. This, combined with Lemma 2.10.3, implies that Mik is a finite set. Hence, by (2.7), M \ {q} is the union of finitely many finite sets. Therefore, the set M is finite.
We give the proof of Theorem 2.10.1 for the case when Σ = {0,1}. If L = ∅ or SUBSEQ(L) = {0,1}*, then SUBSEQ(L) is clearly a regular language.
Hence, we may assume that L is non-empty and that SUBSEQ(L) is a proper subset of {0,1}*.
{y ∈ Abk : x is a subsequence of y}
is regular.
Proof. We will prove the claim by means of an example. Assume that b = 1, k = 3, and x = 11110001000. Then, the language
{y ∈ Abk : x is a subsequence of y}
This should convince you that the claim is true in general.
Lemma 2.10.5 For each b ∈ {0,1} and each 0 ≤ k ≤ n, the set Mbk is finite.
Proof. Again, we will prove the claim by means of an example. Assume that b = 1 and k = 3. Any string in Fbk can be written as
1^a 0^b 1^c 0^d, for some integers a,b,c,d ≥ 1. Consider the function ϕ : Fbk → N^4 that is defined by ϕ(1^a 0^b 1^c 0^d) = (a,b,c,d). Then, ϕ is an injective function, and the following is true, for any two strings x and x′ in Fbk:
● It follows from (2.9) and Lemmas 2.10.7 and 2.10.8 that Fbk is the union of finitely many regular languages. Therefore, by Theorem 2.3.1, Fbk is a regular language.
Chapter 3
Context-Free Languages
Here, S, A, and B are variables, S is the start variable, and a and b are terminals. We use these rules to derive strings consisting of terminals (i.e., elements of {a,b}*), in the following manner:
2. Take any variable in the current string and take any rule that has this variable on the left-hand side. Then, in the current string, replace this variable by the right-hand side of the rule.
The five rules in this example constitute a context-free grammar. The language of this grammar is the set of all strings over the alphabet {a,b} that can be derived from the start variable; each such string has the form a^m b^n, for some m ≥ 1 and n ≥ 1, and no other string over {a,b} can be derived from the start variable.
3. V ∩ Σ = ∅ ,
Definition 3.1.2 Let G = (V,Σ,R,S) be a context-free grammar. Let A be an element of V, let u, v, and w be strings in (V ∪ Σ)*, and let A → w be a rule in R. We say that the string uwv can be derived in one step from the string uAv, and write this as
uAv ⇒ uwv.
1. u = v or
(a) u = u1,
In other words, by starting with the string u and applying rules zero or more times, we obtain the string v. In our example, we see that
aaAbB ⇒* aaaabbbB.
Definition 3.1.5 A language L is called context-free, if there exists a context-free grammar G such that L(G) = L.
● the empty string ε (which we obtain from S by the rule S → ε),
It is not difficult to see that these are the only strings that can be derived from the start variable S1. Therefore, L(G1) = L1.
has the property that L(G2) = L2, where L2 = {1^n 0^n : n ≥ 0}. Therefore, L2 is a context-free language.
Define L = L1 ∪ L2, i.e.,
L = {0^n 1^n : n ≥ 0} ∪ {1^n 0^n : n ≥ 0}.
2. w = 0^m 1^n, for some integers m and n with 0 ≤ n < m, or
3. w contains 10 as a substring.
● the string 1,
we can derive, from S3, all strings of type 3.
and
Let us see how we can derive all strings in L from the start variable S:
2. Given a string of the form a^n c^n, we start adding bs. Each time we add a b, we also add a c. Observe that each b must be added between the as and the cs. Therefore, we use a variable B as a "pointer" to the position in the current string where a b can be added: Instead of deriving a^n c^n from S, we derive the string a^n B c^n. Then, from B, we derive all strings of the form b^m c^m, where m ≥ 0.
conclude that the following strings can be derived from the start variable S:
S ⇒ A ⇒ B ⇒ ε,
Finally, observe that we do not need S; instead, we can use A as the start variable. This gives our final context-free grammar G′′ = (V,Σ,R′′,A), where V = {A,B}, Σ = {a,b,c}, and R′′ consists of the rules
A → aAc | B
B → ε | bBc
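The language of G′′ is {a^n b^m c^(n+m) : n, m ≥ 0}. The following sketch (ours) implements the two rules directly as a recursive membership test and verifies the claim by brute force over all short strings.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def derives(var, w):
    """Can the variable var derive the terminal string w in G''?"""
    if var == "A":   # A -> aAc | B
        return (len(w) >= 2 and w[0] == "a" and w[-1] == "c"
                and derives("A", w[1:-1])) or derives("B", w)
    else:            # B -> ε | bBc
        return w == "" or (len(w) >= 2 and w[0] == "b" and w[-1] == "c"
                           and derives("B", w[1:-1]))

def in_language(w):
    n, m = w.count("a"), w.count("b")
    return w == "a" * n + "b" * m + "c" * (n + m)

for length in range(9):
    for t in product("abc", repeat=length):
        w = "".join(t)
        assert derives("A", w) == in_language(w)
print("L(G'') = {a^n b^m c^(n+m)} verified for all strings of length < 9")
```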
which can be reformulated as: M accepts w if and only if S ⇒* w.
In the next step, M reads the symbol wi+1 and switches from state A to, say, state B; thus, δ(A,wi+1) = B. To guarantee that the above correspondence still holds, we add the rule A → wi+1B to G.
Consider the moment when M has read the entire string w. Let A be the state M is in at that moment. By the above correspondence, we have
Recall that G must have the property that M accepts w if and only if S ⇒* w, which is equivalent to
A ∈ F if and only if S ⇒* w.
● R consists of the rules
and
A → ε, where A ∈ F.
In words,
● r0 = q, and
S = q = r0 ⇒ w1r1 ⇒ w1w2r2 ⇒ ... ⇒ w1w2 ...wnrn ⇒ w1w2 ...wn = w.
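The construction described above is mechanical, so it is natural to code it. This sketch (ours) turns a DFA into the corresponding right-linear context-free grammar; the example is the parity automaton that accepts binary strings with an odd number of 1s.

```python
def dfa_to_grammar(delta, accept):
    """One variable per DFA state; a rule A -> aB for each transition
    delta(A, a) = B, and a rule A -> ε for each accept state A."""
    rules = [(state, symbol + target)
             for (state, symbol), target in delta.items()]
    rules += [(state, "") for state in accept]        # "" stands for ε
    return rules

# Example: the DFA accepting binary strings with an odd number of 1s.
delta = {("qe", "0"): "qe", ("qe", "1"): "qo",
         ("qo", "0"): "qo", ("qo", "1"): "qe"}
for lhs, rhs in dfa_to_grammar(delta, {"qo"}):
    print(lhs, "->", rhs if rhs else "ε")   # start variable: qe
```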
In Sections 2.9.1 and 3.2.2, we have seen that the language {0^n 1^n : n ≥ 0} is not regular, but context-free. Therefore, the class of all context-free languages properly contains the class of regular languages.
3.3.1 An example
Consider the string 010011011, which is an element of L. When the finite automaton M reads this string, it visits the states
S,S,A,B,S,A,A,B,C,C.
Thus,
S ⇒* 010011011,
implying that the string 010011011 is in the language L(G) of the context-free grammar G.
The string 10011 is not in the language L. When the finite automaton M reads this string, it visits the states
S,A,B,S,A,A,
i.e., after the string has been read, M is in the non-accept state A. In the grammar G, reading the string 10011 corresponds to the derivation
3. S → ε, where S is the start variable.
Step 1: Eliminate the start variable from the right-hand side of the rules.
● L(G1) = L(G).
2. For each rule in the current set R1 that is of the form
(b) B → uAv (where u and v are strings that are not both empty), add the rule B → uv to R1; observe that in this way, we replace the two-step derivation B ⇒ uAv ⇒ uv by the one-step derivation
B ⇒ uv;
We repeat this process until all ε-rules have been eliminated. Let R2 be the set of rules after all ε-rules have been eliminated. We define G2 = (V2,Σ,R2,S2), where V2 = V1 and S2 = S1. This grammar has the property that
We consider all unit-rules, one after another. Let A → B be one such rule, where A and B are elements of V2. We know that B ≠ S2. We modify G2 as follows:
We repeat this process until all unit-rules have been eliminated. Let R3 be the set of rules after all unit-rules have been eliminated. We define
For each rule in the current set R3 that is of the form A → u1u2...uk, where k ≥ 3 and each ui is an element of V3 ∪ Σ, we modify G3 as follows:
1. Remove the rule A → u1u2...uk from the current set R3.
where A1,A2,...,Ak−2 are new variables that are added to the current set V3.
● L(G4) = L(G3) = L(G2) = L(G1) = L(G).
For each rule in the current set R4 that is of the form A → u1u2, where u1 and u2 are elements of V4 ∪ Σ, but u1 and u2 are not both contained in V4, we modify G4 as follows: we replace the rule by A → U1u2 and U1 → u1, where U1 is a new variable that is added to the current set V4.
3.4.1 An example
Consider the context-free grammar G = (V,Σ,R,A), where V = {A,B}, Σ = {0,1}, A is the start variable, and R consists of the rules
A → BAB | B | ε
B → 00 | ε
Step 1: Eliminate the start variable from the right-hand side of the rules.
We introduce a new start variable S, and add the rule S → A. This gives the following grammar:
We take the ε-rule A → ε, and remove it. Then we consider all rules that contain A on the right-hand side. There are two such rules:
We take the ε-rule B → ε, and remove it. Then we consider all rules that contain B on the right-hand side. There are three such rules:
Since all ε-rules have been eliminated, this concludes Step 2. (Observe that the rule S → ε is allowed, because S is the start variable.)
We take the unit-rule S → B, remove it, and add the rule S → 00. This
gives the following grammar:
S → ε | BAB | BB | AB | BA | 00
A → BAB | B | BB | AB | BA
B → 00
We take the unit-rule A → B, remove it, and add the rule A → 00. This
gives the following grammar:
S → ε | BAB | BB | AB | BA | 00
A → BAB | BB | AB | BA | 00
B → 00
Since all unit-rules have been eliminated, this concludes Step 3.
● We take the rule S → BAB, remove it, and add the rules S → BA1 and A1 → AB.
● We take the rule A → BAB, remove it, and add the rules A → BA2 and A2 → AB.
S → ε | BB | AB | BA | 00 | BA1
A → BB | AB | BA | 00 | BA2
B → 00
A1 → AB
A2 → AB
● We replace the rule S → 00 by the rules S → A3A3 and A3 → 0.
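The resulting grammar is in Chomsky normal form, so membership can be decided with the classical CYK dynamic-programming algorithm. In this sketch (ours), the variables A1, A2, A3 are renamed X, Y, Z so that every symbol is a single character; the grammar generates exactly the strings of 0s of even length.

```python
BINARY = {"S": {"BB", "AB", "BA", "ZZ", "BX"},   # S -> BB|AB|BA|A3A3|BA1
          "A": {"BB", "AB", "BA", "ZZ", "BY"},   # A -> BB|AB|BA|A3A3|BA2
          "B": {"ZZ"},                           # B -> A3A3
          "X": {"AB"}, "Y": {"AB"}}              # A1 -> AB, A2 -> AB
TERMINAL = {"Z": "0"}                            # A3 -> 0

def cyk(w):
    if w == "":
        return True                              # the rule S -> ε
    n = len(w)
    # table[i][j] = variables deriving the substring w[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, c in enumerate(w):
        table[i][0] = {v for v, t in TERMINAL.items() if t == c}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for left in table[i][split - 1]:
                    for right in table[i + split][span - split - 1]:
                        for v, rhss in BINARY.items():
                            if left + right in rhss:
                                table[i][span - 1].add(v)
    return "S" in table[0][n - 1]

print([w for w in ("", "0", "00", "000", "0000") if cyk(w)])  # ['', '00', '0000']
```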
● that is not contained in Σ; this symbol is called the blank symbol. If a cell contains ✷, then this means that the cell is actually empty.
2. There is a tape head which can move along the tape, one cell to the right per move. This tape head can also read the cell it currently scans.
4. There is a stack head which can read the top symbol of the stack. This head can also pop the top symbol, and it can push symbols of Γ onto the stack.
1. Assume that the pushdown automaton is currently in state r. Let a be the symbol of Σ that is read by the tape head, and let A be the symbol of Γ that is on top of the stack.
2. Depending on the current state r, the tape symbol a, and the stack symbol A,
(a) the pushdown automaton switches to a state r′ of Q (which may be equal to r),
(b) the tape head either moves one cell to the right or stays at the current cell, and
(c) the top symbol A is replaced by a string w that belongs to Γ*. To be more precise,
1. Σ is a finite set, called the tape alphabet; the blank symbol ✷ is not contained in Σ,
2. Γ is a finite set, called the stack alphabet; this alphabet contains the special symbol $,
δ : Q × (Σ ∪ {✷}) × Γ → Q × {N,R} × Γ*.
● the tape head moves according to σ: if σ = R, then it moves one cell to the right; if σ = N, then it does not move, and
We will write the computation step (3.1) as the instruction
raA → r′σw.
Start configuration: Initially, the pushdown automaton is in the start state q, the tape head is on the leftmost symbol of the input string a1a2...an, and the stack only contains the special symbol $.
2. at the time of termination (i.e., at the moment when the stack gets empty), the tape head is on the cell immediately to the right of the cell containing the symbol an (this cell must contain the blank symbol ✷).
In all other cases, the pushdown automaton rejects the input string. Thus, the pushdown automaton rejects this string if
2. at the time of termination, the tape head is not on the cell immediately to the right of the cell containing the symbol an.
We denote by L(M) the language accepted by the pushdown automaton
M. Thus,
L(M) = {w ∈ Σ* : M accepts w}.
We will show how to construct a deterministic pushdown automaton that accepts the set of all strings of properly nested parentheses. Observe that a string w in {(,)}* is properly nested if and only if
● in the complete string w, the number of “(” is equal to the number
of “)”.
We will use the tape symbol a for “(”, and the tape symbol b for “)”.
The idea is as follows. Recall that initially, the stack only contains the special symbol $. The pushdown automaton reads the input string from left to right. For each a it reads, it pushes the symbol S onto the stack, and for each b it reads, it pops the top symbol from the stack. In this way, the number of symbols S on the stack is always equal to the number of as that have been read minus the number of bs that have been read; moreover, the bottom of the stack contains the special symbol $. The input string is properly nested if and only if (i) this difference is always non-negative and (ii) this difference is zero after the entire input string has been read. Thus, the input string is accepted if and only if, during this process, (i) the stack always contains at least the special symbol $ and (ii) at the end, the stack only contains the special symbol $ (which is then popped in the final step).
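Here is a direct Python simulation (ours) of this deterministic pushdown automaton; as in the text, a stands for "(" and b for ")", and acceptance corresponds to the stack holding exactly the special symbol $ at the end.

```python
def accepts_properly_nested(w):
    """Simulate the deterministic PDA: push S for each a = '(',
    pop for each b = ')'; accept iff only $ remains at the end."""
    stack = ["$"]                    # initially the stack contains only $
    for symbol in w:
        if symbol == "a":
            stack.append("S")
        elif symbol == "b":
            if stack[-1] != "S":     # difference went negative: reject
                return False
            stack.pop()
        else:
            return False             # not a tape symbol of this automaton
    return stack == ["$"]            # then $ is popped and M accepts

print(accepts_properly_nested("aabbab"))  # ( ( ) ) ( )  -> True
print(accepts_properly_nested("abb"))     # ( ) )        -> False
```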
3.6.2 Strings of the form 0^n 1^n
The automaton uses two states q0 and q1, where q0 is the start state. Initially, the automaton is in state q0.
● For each 0 that it reads, the automaton pushes one symbol S onto the stack and stays in state q0.
● When the first 1 is read, the automaton switches to state q1. From that moment on,
● for each 1 that is read, the automaton pops the top symbol from the stack and stays in state q1;
3.6.3 Strings with b in the middle
The idea is as follows. The automaton uses two states q and q′, where q is the start state. These states have the following meaning:
● If the automaton is in state q′, then it has read the middle symbol b.
Observe that since the automaton can make only one single pass over the input string, it has to "guess" (i.e., use nondeterminism) when it reaches the middle of the string.
● If the automaton is in state q, then, when reading the current tape symbol, it pushes one symbol S onto the stack and stays in state q.
● If the automaton is in state q′, then, when reading the current tape symbol, it pops the top symbol S from the stack and stays in state q′.
The input string is accepted if and only if, at the moment when the blank symbol ✷ is read, the automaton is in state q′ and the top symbol on the stack is $. In this case, the stack is made empty and, therefore, the computation terminates.
Remark 3.6.1 It can be shown that there is no deterministic pushdown automaton that accepts the language L. The reason is that a deterministic pushdown automaton cannot determine when it reaches the middle of the input string. Thus, unlike for finite automata, nondeterministic pushdown automata are more powerful than their deterministic counterparts.
We will only prove one direction of this theorem. That is, we will show how to convert an arbitrary context-free grammar to a nondeterministic pushdown automaton.
3. $ → ε.
if and only if there exists a computation of M that starts in the initial configuration
where ∅ indicates that the stack is empty.
Assume that $ ⇒* a1a2...an. Then there exists a derivation (using the rules of R) of the string a1a2...an from the start variable $. We may assume that in each step of this derivation, a rule is applied to the leftmost variable in the current string. Hence, because the grammar G is in Chomsky normal form, at any moment during the derivation, the current string has the form
● the tape alphabet is the set Σ of terminals of G,
Proof. Since A is context-free, there exists a context-free grammar G0 such that L(G0) = A. By Theorem 3.4.2, there exists a context-free grammar G that is in Chomsky normal form and for which L(G) = L(G0). The construction given above converts G to a nondeterministic pushdown automaton M that has only one state and for which L(M) = L(G).
2. |vxy| ≤ p, and
3.8.1 Proof of the pumping lemma
The proof of the pumping lemma will use the following result about parse trees:
Now we can start with the proof of the pumping lemma. Let L be a context-free language and let Σ be the alphabet of L. By Theorem 3.4.2, there exists a context-free grammar in Chomsky normal form, G = (V,Σ,R,S), such that L = L(G).
By combining these inequalities, we see that 2^r ≤ 2^(ℓ−1), which can be rewritten as
ℓ ≥ r + 1.
Recall that T is a parse tree for the string s. Hence, the terminals stored at the leaves of T, in the order from left to right, form s. As indicated in the figure above, the nodes storing the variables Aj and Ak partition s into five substrings u, v, x, y, and z, such that
s = uvxyz.
This proves that the third property in the pumping lemma holds. Next we show that the second property holds. That is, we prove that |vxy| ≤ p. Consider the subtree rooted at the node storing the variable Aj. The path from the node storing Aj to the leaf storing the terminal a is a longest path in this subtree.
(Convince yourself that this is true.) Moreover, this path consists of ℓ − j edges. Since Aj ⇒∗ vxy, this subtree is a parse tree for the string vxy (where Aj is used as the start variable). Therefore, by Lemma 3.8.2, we can conclude that |vxy| ≤ 2^(ℓ−j−1). We know that ℓ − 1 − r ≤ j, which is equivalent to ℓ − j − 1 ≤ r. It follows that
|vxy| ≤ 2^(ℓ−j−1) ≤ 2^r = p.
Let the first rule used in this derivation be Aj → BC. (Since the variables Aj and Ak, even though they are equal, are stored at different nodes of the parse tree, and since the grammar G is in Chomsky normal form, this first rule exists.) Then
Aj ⇒ BC ⇒∗ vAky.
Observe that the string BC has length two. Moreover, by applying rules of a grammar in Chomsky normal form, strings cannot get shorter. (Here, we use the fact that the start variable does not occur on the right-hand side of any rule.) Therefore, we have |vAky| ≥ 2. But this implies that |vy| ≥ 1. This completes the proof of the pumping lemma.
First example
We will prove by contradiction that A is not a context-free language.
Observe that the pumping lemma does not tell us the location of the substring vxy in the string s; it only gives us an upper bound on the length of this substring. Therefore, we have to consider three cases, depending on the location of vxy in s.
Case 1: The substring vxy does not contain any c.
Consider the string uv^2xy^2z = uvvxyyz. Since |vy| ≥ 1, this string contains more than p many as or more than p many bs. Since it contains exactly p many cs, it follows that this string is not in the language A. This is a contradiction because, by the pumping lemma, the string uv^2xy^2z is in A.
Case 2: The substring vxy does not contain any a.
Consider the string uv^2xy^2z = uvvxyyz. Since |vy| ≥ 1, this string contains more than p many bs or more than p many cs. Since it contains exactly p many as, it follows that this string is not in the language A. This is a contradiction because, by the pumping lemma, the string uv^2xy^2z is in A.
Case 3: The substring vxy contains at least one a and at least one c.
Since s = a^p b^p c^p, this implies that |vxy| > p, which again contradicts the pumping lemma.
Hence, in each of the three cases, we have obtained a contradiction. Thus, we have proved that the language A is not context-free.
Second example
It is easy to see that the language of this grammar is exactly the language A. Thus, A is context-free. Alternatively, we can show that A is context-free by constructing a (nondeterministic) pushdown automaton that accepts A. This automaton has two states q and q′, where q is the start state. If the automaton is in state q, then it has not yet finished reading the leftmost half of the input string; it pushes all symbols read onto the stack. If the automaton is in state q′, then it is reading the rightmost half of the input string; for each symbol read, it checks whether it is equal to the symbol on top of the stack and, if so, pops the top symbol from the stack. The pushdown automaton uses nondeterminism to "guess" when to switch from state q to state q′ (i.e., when it has finished reading the leftmost half of the input string).
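To make the role of nondeterminism concrete, here is a small Python sketch of this automaton. It is our own illustration, not part of the formal construction, and it assumes (as in this example) that A = {ww^R : w ∈ {a,b}∗}; the brute-force loop over all possible middle positions simulates the nondeterministic guess.

def accepts(s):
    """Simulate the two-state PDA for {w w^R : w in {a,b}*}.

    State q pushes symbols; the machine nondeterministically guesses
    the middle and switches to state q', which pops and compares.
    We simulate the guess by trying every possible middle position.
    """
    for middle in range(len(s) + 1):
        stack = []
        # State q: push the leftmost half onto the stack.
        for ch in s[:middle]:
            stack.append(ch)
        # State q': pop one symbol for each remaining input symbol.
        ok = True
        for ch in s[middle:]:
            if not stack or stack.pop() != ch:
                ok = False
                break
        # Accept if the input is exhausted and the stack is empty.
        if ok and not stack:
            return True
    return False

assert accepts("abba")
assert not accepts("aba")    # odd length: no middle guess works
assert not accepts("abab")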
At this point, you should convince yourself that the two approaches above, which showed that A is context-free, do not work for B. The reason why they do not work is that the language B is not context-free, as we will now prove.
Case 1: The substring vxy overlaps both the leftmost half and the rightmost half of s.
Since |vxy| ≤ p, the substring vxy is contained in the "middle" part of s, i.e., vxy is contained in the block b^p a^p. Consider the string uv^0xy^0z = uxz. Since |vy| ≥ 1, we know that at least one of v and y is non-empty.
rightmost block of bs in s. Hence, in the string uxz, the leftmost block of bs contains fewer bs than the rightmost block of bs. Therefore, the string uxz is not contained in B.
In both cases, we conclude that the string uxz is not an element of the language B. But, by the pumping lemma, this string is contained in B.
In this case, none of the strings uxz, uv^2xy^2z, uv^3xy^3z, uv^4xy^4z, etc., is contained in B. But, by the pumping lemma, every one of these strings is contained in B.
To summarize, in each of the three cases, we have obtained a contradiction. Thus, the language B is not context-free.
Third example
In general, context-free grammars can verify addition, whereas finite automata are not powerful enough for this. We now consider the problem of verifying multiplication: Let A be the language defined as
A = {a^m b^n c^(nm) : m ≥ 0, n ≥ 0}.
There are three possible cases, depending on the locations of v and y in the string s.
Case 1: The substring v does not contain any a and does not contain any b, and the substring y does not contain any a and does not contain any b.
Case 2: The substring v does not contain any c and the substring y does not contain any c.
● the number of as is at least p and the number of bs is at least p + 1.
Case 3: The substring v contains at least one b and the substring y contains at least one c.
Chapter 4
Turing Machines and the Church-Turing Thesis
In the previous chapters, we have seen several computational devices that can be used to accept or generate regular and context-free languages. Even though these two classes of languages are fairly large, we have seen in Section 3.8.2 that these devices are not powerful enough to accept simple languages such as A = {a^m b^n c^(mn) : m ≥ 0, n ≥ 0}.
In this chapter, we introduce the Turing machine, which is a simple model of a real computer. Turing machines can be used to accept all context-free languages, but also languages such as A. We will argue that every problem that can be solved on a real computer can also be solved by a Turing machine (this statement is known as the Church-Turing Thesis). In Chapter 5, we will consider the limitations of Turing machines and, hence, of real computers.
1. There are k tapes, for some fixed k ≥ 1. Each tape is divided into cells and is infinite both to the left and to the right. Each cell stores a symbol belonging to a finite set Γ, which is called the tape alphabet. The tape alphabet contains the blank symbol ✷. If a cell contains ✷, then this means that the cell is actually empty.
2. Each tape has a tape head, which can move along the tape, one cell per move. It can also read the cell it currently scans and replace the symbol in this cell by another symbol.
(b) each tape head writes a symbol of Γ in the cell it currently scans (this symbol may be equal to the symbol currently stored in the cell), and
(c) each tape head either moves one cell to the left, moves one cell to the right, or stays at the current cell.
where
1. Σ is a finite set, called the input alphabet; the blank symbol ✷ is not contained in Σ,
2. Γ is a finite set, called the tape alphabet; this alphabet contains the blank symbol ✷, and Σ ⊆ Γ,
δ : Q × Γ^k → Q × Γ^k × {L, R, N}^k.
computation step": Let r ∈ Q, and let a1,a2,...,ak ∈ Γ. Besides, let r′ ∈ Q, Γ,
and σ1,σ2,...,σk ∈ {L,R,N} be with the end goal that
δ(r, a1, a2, . . . , ak) = (r′, a′1, a′2, . . . , a′k, σ1, σ2, . . . , σk).. (4.1)
If the Turing machine is in state r and the head of the i-th tape reads the symbol ai, 1 ≤ i ≤ k, then
● the Turing machine switches to state r′,
● the head of the i-th tape replaces the scanned symbol ai by the symbol a′i, and
● the head of the i-th tape moves according to σi, 1 ≤ i ≤ k: if σi = L, then the tape head moves one cell to the left; if σi = R, then it moves one cell to the right; and if σi = N, then the tape head does not move.
Start configuration: The input is a string over the input alphabet Σ. Initially, this input string is stored on the first tape, and the head of this tape is on the leftmost symbol of the input string. Initially, all other k − 1 tapes are empty, i.e., contain only blank symbols, and the Turing machine is in the start state q.
Computation and termination: Starting in the start configuration, the Turing machine performs a sequence of computation steps as described above. The computation terminates at the moment when the Turing machine enters the accept state qaccept or the reject state qreject. (Hence, if the Turing machine never enters the states qaccept and qreject, the computation does not terminate.)
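To see these definitions in action, the following Python sketch (ours, for the case k = 1) runs a Turing machine by repeatedly applying computation steps of the form (4.1) until the machine enters qaccept or qreject. The dictionary encoding of δ and the max_steps guard are illustrative choices of ours; a real Turing machine need not halt at all.

def run_tm(delta, w, start, accept, reject, blank="_", max_steps=10_000):
    """Run a one-tape Turing machine given by the transition function delta.

    delta maps (state, symbol) -> (new_state, new_symbol, move), where
    move is "L", "R", or "N", mirroring equation (4.1) with k = 1.
    """
    tape = dict(enumerate(w))          # sparse tape: cell index -> symbol
    pos, state = 0, start
    for _ in range(max_steps):         # guard: the machine need not halt
        if state in (accept, reject):
            return state == accept
        sym = tape.get(pos, blank)
        state, new_sym, move = delta[(state, sym)]
        tape[pos] = new_sym
        pos += {"L": -1, "R": 1, "N": 0}[move]
    raise RuntimeError("no decision within max_steps")

# A toy machine over {a} that accepts exactly the strings of even length.
delta = {
    ("even", "a"): ("odd", "a", "R"),
    ("odd",  "a"): ("even", "a", "R"),
    ("even", "_"): ("qacc", "_", "N"),
    ("odd",  "_"): ("qrej", "_", "N"),
}
print(run_tm(delta, "aaaa", "even", "qacc", "qrej"))   # True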
Observe that a string w ∈ Σ∗ does not belong to L(M) if and only if, on input w,
We will show how to construct a Turing machine with one tape that decides whether or not an input string w ∈ {a,b}∗ is a palindrome. Recall that w is a palindrome if reading w from left to right gives the same result as reading w from right to left. Examples of palindromes are abba, baabbbbaab, and the empty string ǫ.
Start of the computation: The tape contains the input string w, the tape head is on the leftmost symbol of w, and the Turing machine is in the start state q0.
Idea: The tape head reads the leftmost symbol of w, deletes this symbol, and "remembers" it by means of a state. Then the tape head moves to the rightmost symbol and checks whether it is equal to the (already deleted) leftmost symbol.
● If they are equal, then the rightmost symbol is deleted, the tape head moves to the new leftmost symbol, and the whole process is repeated.
● If they are not equal, the Turing machine enters the reject state, and the computation terminates.
The Turing machine enters the accept state as soon as the string currently stored on the tape is empty.
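The following Python sketch (ours, not the eight-state machine itself) mirrors this idea: a list stands in for the tape, deleting a symbol corresponds to writing the blank symbol, and the remembered symbol plays the role of the state.

def is_palindrome(w):
    """Mimic the one-tape Turing machine's strategy on input w."""
    tape = list(w)
    while tape:
        first = tape.pop(0)      # delete leftmost symbol, remember it
        if not tape:             # odd length: only the middle symbol left
            return True          # accept
        last = tape.pop()        # move to rightmost symbol, delete it
        if first != last:
            return False         # enter the reject state
    return True                  # empty tape: enter the accept state

for w in ["abba", "b", "abb", ""]:
    print(repr(w), is_palindrome(w))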
We will use the input alphabet Σ = {a, b} and the tape alphabet Γ ={a, b,
✷}. The set Q of states consists of the following eight states:
You should trace the computation of this Turing machine on some sample inputs, for example abba, b, abb, and the empty string (which is a palindrome).
The input alphabet is Σ = {a,b} and the tape alphabet is Γ = {a,b,✷}. The set Q of states consists of the following five states:
The transition function δ is specified by the following rules:
Again, you should run this Turing machine on some sample inputs.
We will construct a Turing machine with one tape that accepts the language
{a^n b^n c^n : n ≥ 0}.
Recall that we proved in Section 3.8.2 that this language is not context-free.
Start of the computation: The tape contains the input string w and the tape head is on the leftmost symbol of w. The Turing machine is in the start state.
Idea: In the previous examples, the tape alphabet Γ was equal to the union of the input alphabet Σ and {✷}. In this example, we will add one symbol d to the tape alphabet. As we will see, this simplifies the construction of the Turing machine. Hence, the input alphabet is Σ = {a,b,c} and the tape alphabet is Γ = {a,b,c,d,✷}. Recall that the input string w belongs to Σ∗. The general approach is to split the computation into two stages.
Stage 2: In this stage, we repeat the following: Walk along the string from left to right, replace the leftmost a by d, replace the leftmost b by d, replace the leftmost c by d, and walk back to the leftmost symbol. For this stage, we use the following states:
We remark that Stage 1 is really necessary for this Turing machine: If we omit this stage and use only Stage 2, then the string aabcbc would be accepted.
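As a sanity check, here is a short Python sketch (ours) of the two-stage idea. Since the description of Stage 1 is not reproduced above, we assume it checks that the input has the form a∗b∗c∗; Stage 2 then repeatedly marks the leftmost a, b, and c with d.

import re

def accepts_anbncn(w):
    """Sketch of the two-stage Turing machine for {a^n b^n c^n : n >= 0}.

    Stage 1: check that w is in a*b*c* (our assumption about the
    omitted stage). Stage 2: repeatedly mark the leftmost a, b, and c
    with d; accept when only d's remain, reject if a letter runs out.
    """
    if not re.fullmatch(r"a*b*c*", w):
        return False                        # Stage 1 rejects
    tape = list(w)
    while any(ch != "d" for ch in tape):
        for letter in "abc":
            try:
                tape[tape.index(letter)] = "d"   # mark leftmost occurrence
            except ValueError:
                return False                # letter missing: reject
    return True                             # all symbols marked: accept

assert accepts_anbncn("aabbcc") and accepts_anbncn("")
assert not accepts_anbncn("aabcbc")         # rejected only thanks to Stage 1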
Stage 1. Walk along the string from left to right, delete the leftmost a, delete the leftmost b, and delete the rightmost c.
Stage 2. Shift the substring of bs and cs one position to the left; then walk back to the leftmost symbol.
The input alphabet is Σ = {a,b,c} and the tape alphabet is Γ = {a,b,c,✷}.
Moreover, we use a state q1 which has the following meaning: If the input string is of the form a^i bc, for some i ≥ 1, then after Stage 1, the tape contains the string a^(i−1)✷✷, the tape head is on the ✷ immediately to the right of the as, and the Turing machine is in state q1. In this case, we move one cell to the left; if we then read ✷, then i = 1 and we accept; otherwise, we read a and we reject.
We will sketch how to construct a Turing machine with one tape that accepts the language
{a^m b^n c^(mn) : m ≥ 0, n ≥ 0}.
Recall that we proved in Section 3.8.2 that this language is not context-free.
The input alphabet is Σ = {a,b,c} and the tape alphabet is Γ = {a,b,c,$,✷}, where the purpose of the symbol $ will become clear below.
Start of the computation: The tape contains the input string w and the tape head is on the leftmost symbol of w. The Turing machine is in the start state.
Idea: Observe that a string a^m b^n c^k is in the language if and only if, for each a, the string contains n many cs. Based on this, the computation consists of the following stages:
Stage 1. Walk along the input string w from left to right and check whether w is an element of the language described by the regular expression a∗b∗c∗. If this is not the case, then reject the input string. Otherwise, go to Stage 2.
● Zigzag between the bs and the cs; each time, replace the leftmost b by the symbol $ and replace the rightmost c by the blank symbol. If, for some b, there is no c left, the Turing machine rejects the input string.
Observe that in this third stage, the string a^m b^n c^k is transformed into the string a^(m−1) $^n c^(k−n).
● Replace each $ by b.
Hence, in this fourth stage, the string a^(m−1) $^n c^(k−n) is transformed into the string a^(m−1) b^n c^(k−n).
Observe that the input string a^m b^n c^k is in the language if and only if the string a^(m−1) b^n c^(k−n) is in the language. Thus, the Turing machine repeats Stages 3 and 4 until there are no as left. At that moment, it checks whether there are any cs left; if so, it rejects the input string; otherwise, it accepts the input string.
We hope you believe that this description of the algorithm can be turned into a formal description of a Turing machine.
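In that spirit, the following Python sketch (ours) follows the stages above at the level of strings rather than tape cells; the counters num_a, num_b, and num_c are illustrative stand-ins for walking along the tape.

import re

def accepts_ambncmn(w):
    """Sketch of the staged algorithm for {a^m b^n c^(mn) : m, n >= 0}.

    Stage 1: verify that w has the form a*b*c*.
    Stages 3 and 4 (repeated): delete one a; zigzag between the bs and
    cs, erasing one c per b via the $-markers; then restore the bs.
    Accept when no as remain and all cs have been consumed.
    """
    m = re.fullmatch(r"(a*)(b*)(c*)", w)
    if m is None:
        return False                        # Stage 1 rejects
    num_a, num_b, num_c = (len(g) for g in m.groups())
    while num_a > 0:
        num_a -= 1                          # delete the leftmost a
        if num_c < num_b:                   # some b finds no matching c
            return False
        num_c -= num_b                      # erase n cs (one per b)
    return num_c == 0                       # leftover cs mean reject

assert accepts_ambncmn("aabbbcccccc")       # m=2, n=3, k=6=mn
assert not accepts_ambncmn("aabbbccccc")    # k=5, but mn=6
assert accepts_ambncmn("")                  # m = n = 0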
In Section 4.2, we have seen two Turing machines that accept palindromes; the first Turing machine has one tape, whereas the second one has two tapes. You will have noticed that the two-tape Turing machine is easier to understand than the one-tape Turing machine. This raises the question of whether multi-tape Turing machines are more powerful than their one-tape counterparts. The answer is "no":
In words, we take the tape alphabet Γ of M and add, for each x ∈ Γ, the symbol ẋ. Additionally, we add a special symbol #.
The Turing machine N will be defined in such a way that any configuration of the two-tape Turing machine M, for example
corresponds to the following configuration of the one-tape Turing machine N:
● N walks along the string to the right until it finds the first dotted symbol. (This symbol indicates the position of the head on the first tape of M.) N remembers this first dotted symbol and keeps walking to the right until it finds the second dotted symbol. (This symbol indicates the position of the head on the second tape of M.) Again, N remembers this second dotted symbol.
● At this moment, N is still at the second dotted symbol. N updates this part of the tape by making the change that M would make on its second tape. (This change is given by the transition function of M; it depends on the current state of M and the two symbols that M reads on its two tapes.)
● N walks to the left until it finds the first dotted symbol. Then it updates this part of the tape by making the change that M would make on its first tape.
● In the previous two steps, in which the tape is updated, it may be necessary to shift a part of the tape.
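As an illustration of this encoding (our own sketch, in place of the figure), the function below packs a two-tape configuration into the single string stored by N: the two tape contents are separated by the symbol #, and the symbol scanned by each head is "dotted", which we write here as x′.

def encode(tape1, head1, tape2, head2, blank="_"):
    """Encode a two-tape configuration of M on the single tape of N.

    Each tape is given as a string plus a head position. N stores
    #tape1#tape2#, where the symbol scanned by each head is marked;
    we write the dotted version of a symbol x as x'.
    """
    def dot(tape, head):
        cells = list(tape if tape else blank)
        cells[head] = cells[head] + "'"    # mark the head position
        return "".join(cells)
    return "#" + dot(tape1, head1) + "#" + dot(tape2, head2) + "#"

# Example: M scans the second b on tape 1 and the blank on tape 2.
print(encode("abba", 2, "_", 0))   # -> #abb'a#_'#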
answer is "yes". In fact, the following theorem states that many different notions of "computational process" are equivalent. (We hope that you have gained enough intuition that none of the claims in this theorem comes as a surprise to you.)
4. Java programs.
5. C++ programs.
6. Lisp programs.
In other words, if we define the notion of an algorithm using any of the models in this theorem, then it does not matter which model we take: all these models yield the same notion of an algorithm.
Of course, in our terminology, Hilbert asked whether there exists an algorithm that decides, given an arbitrary polynomial equation (with integer coefficients) such as
In other words, all attempts to give a rigorous definition of the notion of an algorithm led to the same notion. As a result, computer scientists nowadays agree on what is called the Church-Turing Thesis:
Chapter 5
Complexity Theory
● We say that the Turing machine M computes the function F within time T if
tM(w) ≤ T(|w|)
for all strings w ∈ Σ∗.
On the other hand, the running time of the Turing machine of Section 4.2.2, which also decides the palindromes, but using two tapes instead of just one, is O(n).
Theorem 5.1.2 Let A be a language (resp. let F be a function) that can be decided (resp. computed) in time T by an algorithm of type M. Then there is an algorithm of type N that decides A (resp. computes F) in time T′, where
It follows from Theorem 5.1.2 that the notion of "polynomial time" does not depend on the model of computation:
5.2.1 Some examples
Palindromes
We have seen that there exists a one-tape Turing machine that decides Pal in O(n^2) time. Hence, Pal ∈ P.
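For comparison, on a random-access machine the same language can be decided by the following two-pointer scan, which runs in O(n) time; this sketch of ours is just one more way to see that Pal ∈ P.

def in_pal(w):
    """Decide Pal by moving two indices toward each other.

    Each symbol is inspected at most once, so the running time is
    O(n) on a random-access machine.
    """
    i, j = 0, len(w) - 1
    while i < j:
        if w[i] != w[j]:
            return False
        i, j = i + 1, j - 1
    return True

assert in_pal("baabbbbaab") and in_pal("") and not in_pal("abb")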
Some functions in FP
polynomial time. Using a technique called dynamic programming (which you will learn in COMP 3804), the following result can be shown:
Let G be a graph with vertex set V and edge set E. We say that G is 2-colorable if it is possible to give each vertex of V a color such that
1. for each edge (u,v) ∈ E, the vertices u and v have different colors, and
2. only two colors are used to color all vertices.
The length of the input G, i.e., the number of bits needed to specify G, is equal to m^2 =: n. We will present an algorithm that decides, in O(n) time, whether or not the graph G is 2-colorable.
The algorithm uses the colors red and blue. It gives the first vertex the color red. Then, the algorithm considers all vertices that are connected by an edge to the first vertex, and colors them blue. Now the algorithm is done with the first vertex; it marks this first vertex.
It may happen that there is no vertex i that already has a color but has not yet been marked. (In other words, every vertex i that is not marked does not have a color yet.) In this case, the algorithm picks an arbitrary vertex i having this property and colors it red. (This vertex i is the first vertex in its connected component that gets a color.)
1. i has a color,
3. the algorithm has verified that every vertex that is connected by an edge to i has a color different from i's color.
The algorithm uses two arrays f(1...m) and a(1...m), and a variable M. The value of f(i) is equal to the color (red or blue) of vertex i; if i does not have a color yet, then f(i) = 0. The value of a(i) is equal to
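The description of the arrays f and a is cut off above; the following Python sketch (ours) implements the same algorithm with a queue in place of that bookkeeping, and it runs in O(m^2) = O(n) time on an adjacency-matrix input.

from collections import deque

def two_colorable(adj):
    """Decide 2-colorability of a graph given by an m x m adjacency matrix.

    Colors: 0 = none yet, 1 = red, 2 = blue. Each vertex and each
    matrix row is processed once, so the running time is O(m^2) = O(n).
    """
    m = len(adj)
    color = [0] * m
    for start in range(m):
        if color[start]:
            continue
        color[start] = 1              # first vertex of its component: red
        queue = deque([start])
        while queue:
            u = queue.popleft()       # "mark" vertex u
            for v in range(m):
                if adj[u][v]:
                    if color[v] == 0:
                        color[v] = 3 - color[u]   # give v the other color
                        queue.append(v)
                    elif color[v] == color[u]:
                        return False  # two neighbors share a color
    return True

square = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]   # 4-cycle: 2-colorable
triangle = [[0,1,1],[1,0,1],[1,1,0]]                 # 3-cycle: not
print(two_colorable(square), two_colorable(triangle))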
Chapter 6
Summary
We have seen several different models for "processing" languages, i.e., for processing sets of strings over some finite alphabet. For each of these models, we have asked which kinds of languages can be processed and which kinds cannot. In this final chapter, we give a brief summary of these results.
We have seen that the class of regular languages is closed under the regular operations: If A and B are regular languages, then
1. A ∪ B is regular,
2. AB is regular,
3. A∗ is regular,
Finally, the pumping lemma for regular languages gives a property that every regular language possesses. We have used this to prove that languages such as {a^n b^n : n ≥ 0} are not regular.
2. Every context-free grammar in Chomsky normal form can be converted to an equivalent nondeterministic pushdown automaton.
We have seen that the class of context-free languages is closed under the union, concatenation, and star operations: If A and B are context-free languages, then
1. A ∪ B is context-free,
2. AB is context-free, and
3. A∗ is context-free.
However,
Finally, the pumping lemma for context-free languages gives a property that every context-free language possesses. We have used this to prove that languages such as {a^n b^n c^n : n ≥ 0} are not context-free.
The Church-Turing Thesis: In Chapter 4, we considered "reasonable" computational devices that model real computers. Examples of such devices are Turing machines (with one or more tapes) and Java programs. It turns out that all known "reasonable" devices are equivalent, i.e., can be converted to one another. This led to the Church-Turing Thesis:
Moreover,
2. there exist languages, for example the Halting Problem, that are enumerable but not decidable,
In fact,
The following statements are equivalent:
3. It is not known whether there exist languages that are in NP but not in P.
1. B belongs to the class NP, and