TOC Full Note For PU
Introduction
Let A and B be sets. We say that B is a subset of A, and write B ⊆ A, if every element of B is
an element of A. Two sets A and B are equal (we write A = B) if their members are the same.
In practice, to prove that A = B, we prove A ⊆ B and B ⊆ A. A set with no elements is called
the empty set, also called the null set or void set, and is denoted by ∅. We define some
operations on sets as follows:
Union of A and B: A ∪ B = {x | x ∈ A or x ∈ B}
Intersection of A and B: A ∩ B = {x | x ∈ A and x ∈ B}
Complement of B in A: A – B = {x | x ∈ A and x ∉ B}
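These operations can be tried directly with Python's built-in set type; the sets A and B below are arbitrary examples chosen for illustration, not taken from the notes.

```python
# Illustrating the set operations defined above with Python's built-in sets.
A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A | B)   # union A ∪ B            -> {1, 2, 3, 4, 5}
print(A & B)   # intersection A ∩ B     -> {3, 4}
print(A - B)   # complement of B in A   -> {1, 2}
print(B <= A)  # is B a subset of A?    -> False
```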
B. Relations
The concept of a relation mirrors the everyday act of comparing one object with another. A
relation pairs objects in a statement that is true in some cases and false in others; the statement 'x
is less than y' is true if x = 3 and y = 4, for example, but false if x = 3 and y = 2.
Properties of relations
Relations generally have three properties of interest: reflexive, symmetric, and transitive. A relation
R in a set S is called an equivalence relation if it is reflexive, symmetric, and transitive.
A relation R in a set S is reflexive if xRx for every x in S.
A relation R in a set S is symmetric if for x, y in S, yRx whenever xRy.
A relation R in a set S is transitive if for x, y and z in S, xRz whenever xRy and yRz.
Types of functions
One-to-one (or injective) function: The function f : X → Y is said to be one-to-one
if different elements in X have different images, i.e. f(x1) ≠ f(x2) when x1 ≠ x2. For
example, f : Z → Z given by f(n) = 2n is one-to-one but not onto.
Onto (surjective) function: The function f : X → Y is said to be onto if every element
y in Y is the image of some element x in X, i.e. the range of f is all of Y.
One-to-one correspondence (bijection): The function f : X → Y is said to be a one-
to-one correspondence if f is both one-to-one and onto (a small check of these properties in code follows below).
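These properties can be checked mechanically on small finite sets. The sketch below is illustrative only; the sets X, Y and the chosen function are our own examples.

```python
# Checking one-to-one (injective) and onto (surjective) on finite sets.
X = [-2, -1, 0, 1, 2]
Y = [0, 1, 2, 3, 4]

def is_one_to_one(f, X):
    images = [f(x) for x in X]
    return len(images) == len(set(images))      # distinct inputs give distinct images

def is_onto(f, X, Y):
    return {f(x) for x in X} >= set(Y)          # every y in Y is the image of some x

square = lambda x: x * x
print(is_one_to_one(square, X))  # False: f(-1) == f(1)
print(is_onto(square, X, Y))     # False: 2 and 3 are never hit
```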
A. Alphabet
An alphabet is a non-empty set of symbols or letters, e.g. the characters {a, b, . . . , z}, the digits
{0, 1}, the ASCII characters, etc. An alphabet is usually denoted by the symbol Σ. Alphabets are
important in the study of formal languages, automata and semiautomata. In most cases, when
defining an instance of an automaton, such as a deterministic finite automaton (DFA), one must
specify an alphabet from which the input strings for the automaton are built.
For example, a common alphabet is {0, 1}, the binary alphabet. A finite string is a finite
sequence of letters from an alphabet. For example, if we use the binary alphabet {0, 1}, the
strings (ε, 0, 1, 00, 01, 10, 11, 000, etc.) would all be in the Kleene closure of the alphabet,
i.e. the set of all strings over it (where ε represents the empty string). An infinite sequence of
letters may also be constructed from the elements of an alphabet.
B. Strings
A string (or word) is a finite sequence of symbols chosen from some alphabet. For example:
01101 and 111 are strings from the binary alphabet Σ = {0, 1}.
The string with zero occurrences of symbols is called the empty string (denoted by ε).
The number of positions for symbols in the string is called the length of the string; the length
of a string w is written |w|. For example, |011| = 3 and |ε| = 0.
Generation of strings
If Σ is an alphabet, we can express the set of all strings of a certain length from that alphabet
by using an exponential notation: Σk is the set of strings of length k, each of whose symbols is in Σ.
Examples:
Σ0 = {ε}, regardless of what alphabet Σ is; that is, ε is the only string of length 0.
Kleene star
Σ*: The set of all strings over an alphabet Σ
{0, 1}* = {ε, 0, 1, 00, 01, 10, 11, 000, . . .}
Σ* = Σ0 ∪ Σ1 ∪ Σ2 ∪ . . .
The symbol ∗ is called Kleene star and is named after the mathematician and logician
Stephen Cole Kleene.
Note:
Σ+ = Σ1 ∪ Σ2 ∪ . . .
Thus: Σ* = Σ+ ∪ {ε}
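Since Σ* is built up as the union of the finite sets Σk, the sets Σ0, Σ1, Σ2, … can be enumerated directly; the small Python sketch below does this for the binary alphabet (the helper name is our own).

```python
from itertools import product

sigma = ['0', '1']  # the binary alphabet Σ = {0, 1}

def strings_of_length(sigma, k):
    """All strings of length k over sigma, i.e. the set Σ^k."""
    return [''.join(p) for p in product(sigma, repeat=k)]

print(strings_of_length(sigma, 0))  # [''] -- the empty string ε is the only string of length 0
print(strings_of_length(sigma, 2))  # ['00', '01', '10', '11']

# Σ* is infinite, but its members can be listed up to any bounded length:
prefix_of_kleene_star = [w for k in range(3) for w in strings_of_length(sigma, k)]
print(prefix_of_kleene_star)        # ['', '0', '1', '00', '01', '10', '11']
```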
String concatenation
Strings can be concatenated, yielding another string, using the binary operation (.), called
concatenation, on Σ*. If a1a2a3 . . . an and b1b2 . . . bm are in Σ*, then a1a2a3 . . . an . b1b2 . . . bm
= a1a2a3 . . . an b1b2 . . . bm.
If x and y are strings, then x.y denotes the concatenation of x and y, that is, the string formed
by making a copy of x and following it by a copy of y. For example:
x = 01101 and y = 110 then xy = 01101110 and yx = 11001101
For any string w, the equations εw = wε = w hold.
C. Language
A language is a set of strings over an alphabet, possibly with associated meanings. A formal language L over an
alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of
words are grouped into expressions, whereas rules and constraints may be formulated for the
creation of 'well-formed expressions'. For example, the expression 0(0 + 1)*1 represents the
set of all strings that begin with a 0 and end with a 1. A formal language is often defined by
means of a formal grammar such as a regular grammar or context-free grammar, also called
its formation rule.
Examples: A language may be a programming language such as C, or a natural language such as English
or French. Further examples:
The language of all strings consisting of n 0s followed by n 1s (n ≥ 0): {ε, 01, 0011,
000111, . . .}
The set of strings of 0s and 1s with an equal number of each: {ε, 01, 10, 0011, 0101,
1001, . . .}
Σ* is a language for any alphabet Σ
∅, the empty language, is a language over any alphabet
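As a small sanity check on the first example language, membership in {0^n 1^n : n ≥ 0} can be tested with a few lines of Python (the function name is our own):

```python
def in_zeros_then_ones(w):
    """True iff w consists of n 0's followed by n 1's, for some n >= 0."""
    half = len(w) // 2
    return len(w) % 2 == 0 and w[:half] == '0' * half and w[half:] == '1' * half

for w in ['', '01', '0011', '0101', '10']:
    print(repr(w), in_zeros_then_ones(w))
# '' True, '01' True, '0011' True, '0101' False, '10' False
```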
Example
Let L1 = the set of all strings of 0's and 1's ending in 00; then the equivalent regular
expression can be described as:
Any string in L1 is obtained by concatenating any string over {0, 1} and the string 00.
Since {0, 1} is represented by 0 + 1, L1 is represented by (0 + 1)*00.
L2 = the set of all strings of 0's and 1's beginning with 0 and ending with 1; then the
equivalent regular expression can be described as:
Any string in L2 is obtained by concatenating 0, any string over {0, 1} and 1. Thus L2
can be represented by 0(0+1)*1.
L3 = {ε, 11, 1111, 111111, …}; then the equivalent regular expression can be described as:
Any element of L3 is either ε or a string with an even number of 1's, so L3 can be represented
by (11)*.
Notes:
r* = r+ + ε, for any regular expression r.
{0, 1}* = (0 + 1)*, but the expression (0 + 1) is not the same as (0.1).
0 + 1 = either 0 or 1, but not both. Thus, 0* + 1* = either a string of zeros or a string of ones.
0.1 = first consume 0, then 1. Thus, 0*1* = first a string of zeros, then a string of ones.
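The same expressions can be checked with Python's re module; note that re writes union as | where the notes write +, and concatenation is simply juxtaposition. This is just an illustrative translation of the three languages above.

```python
import re

L1 = re.compile(r'^[01]*00$')   # (0 + 1)*00 : strings of 0's and 1's ending in 00
L2 = re.compile(r'^0[01]*1$')   # 0(0 + 1)*1 : strings beginning with 0 and ending with 1
L3 = re.compile(r'^(11)*$')     # (11)*      : ε and strings of an even number of 1's

for w in ['100', '0011', '1111', '']:
    print(repr(w), bool(L1.match(w)), bool(L2.match(w)), bool(L3.match(w)))
# '100'  True  False False
# '0011' False True  False
# '1111' False False True
# ''     False False True
```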
An automaton in which the output depends only on the input is called an automaton without
memory.
An automaton in which the output depends on the input and on the state as well is called an
automaton with finite memory.
An automaton in which the output depends only on the state of the machine is called a Moore
machine.
An automaton in which the output depends on the state as well as on the input at any instant
of time is called a Mealy machine.
The sets of strings recognized by finite automata are the regular languages, which are described by
regular expressions. The sets built up from the null set, the empty string, and singleton strings (sets
containing a single symbol) by concatenation, union and Kleene closure, in arbitrary order, are called regular sets.
State: A uniquely identifiable set of values measured at various points in a digital system.
Next State
o The state to which the state machine makes the next transition, determined by the
inputs present when the device is clocked.
o At the end of the input string, the FA must be in one of the possible final
states for the string to be accepted.
Notation for FA
There are two preferred notations for describing automata: the transition table and the state (transition) diagram.
The state diagram accepts a string w in Σ* if there exists a path which originates from the
initial state, goes along the arrows and terminates at some final state with path value w.
Example:
Let us have the transition function: δ(q0, 0) = q0,
δ(q0, 1) = q1, δ(q1, 0) = q0, δ(q1, 1) = q1, with the
initial state q0 and the final state q1; then the
corresponding state diagram is given by:
Language of DFA
The language of a DFA A = (Q, Σ, δ, q0, F) is defined by L(A) = {w : δ*(q0, w) ∈ F}, where δ* is the
extended transition function; i.e. the language of A is the set of strings w that take the start state q0 to
one of the accepting states. If a language L is L(A) for some DFA A, then we say L is a regular language.
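This definition translates almost directly into code: run δ symbol by symbol from q0 and check whether the final state is accepting. The sketch below encodes δ as a Python dictionary and uses the two-state example machine given earlier (final state q1, so it accepts strings ending in 1).

```python
def dfa_accepts(delta, q0, finals, w):
    """Simulate a DFA: return True iff the extended transition function lands in a final state."""
    q = q0
    for symbol in w:
        q = delta[(q, symbol)]          # one application of delta per input symbol
    return q in finals

delta = {('q0', '0'): 'q0', ('q0', '1'): 'q1',
         ('q1', '0'): 'q0', ('q1', '1'): 'q1'}

for w in ['0110', '101', '']:
    print(repr(w), dfa_accepts(delta, 'q0', {'q1'}, w))
# '0110' False, '101' True, '' False
```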
Notice that the only difference between an NFA and a DFA is that the transition function δ returns a set of
states in the case of an NFA and a single state in the case of a DFA.
EXERCISE – I
Formally we represent an ε-NFA by A = (Q, Σ, δ, q0, F), where all components have the
same interpretation as for an NFA, except that δ is now a function that takes as arguments:
i. a state in Q, and
ii. a member of Σ ∪ {ε}, i.e. either an input symbol or the symbol for the empty string;
ε itself cannot be a member of the alphabet Σ.
ε-Closure
A string w in Σ* will be accepted by an ε-NFA if there exists at least one path corresponding to w
which starts in an initial state and ends in one of the final states. Since this path may be formed
by ε-transitions as well as non-ε-transitions, we may need to define a function ε-closure(q),
where q is a state of the automaton.
The function ε-closure(q) is defined as the set of all those states of the automaton (i.e. the ε-NFA)
which can be reached from q on a path labeled by ε, i.e. without consuming any input symbol.
Note that if there is no ε-transition from a state, then the ε-closure of that state
is just the state itself.
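The ε-closure is just a reachability computation over the ε-transitions, which can be sketched as a small graph search; the eps_moves map below is an invented example, not an automaton from the notes.

```python
def eps_closure(q, eps_moves):
    """Set of states reachable from q using only ε-transitions (including q itself)."""
    closure, stack = {q}, [q]
    while stack:
        state = stack.pop()
        for nxt in eps_moves.get(state, set()):
            if nxt not in closure:
                closure.add(nxt)
                stack.append(nxt)
    return closure

eps_moves = {'q0': {'q1'}, 'q1': {'q2'}, 'q2': set()}
print(eps_closure('q0', eps_moves))  # {'q0', 'q1', 'q2'}
print(eps_closure('q2', eps_moves))  # {'q2'} -- no ε-move, so just the state itself
```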
EXERCISE – II
Note: A given regular expression may be represented by multiple isomorphic graphs when it is
converted into an equivalent deterministic finite automaton.
Example
The given regular expression (R) = (1+01)* (0+00) (1+10)*
Now, the equivalent Finite Automata (FA) can be constructed as:
Since the resulting finite automaton is an NFA, it can be converted into a DFA as follows.
Let δ' be the new transition function for the equivalent DFA.
Thus,
δ'(q0, 0) = δ(q0, 0) = {q5, q6, qf} (new state) → qr (say)
δ'(q0, 1) = δ(q0, 1) = {q0}
δ'[(q5, q6, qf), 0] = δ(q5, 0) ∪ δ(q6, 0) ∪ δ(qf, 0) = ∅ ∪ {qf} ∪ {q7} = {q7, qf} (new state)
→ qs (say)
δ'[(q5, q6, qf), 1] = δ(q5, 1) ∪ δ(q6, 1) ∪ δ(qf, 1) = {q0} ∪ ∅ ∪ {qf} = {q0, qf} (new state)
→ qt (say)
δ'[(q5, q6, q7, qf), 0] = δ(q5, 0) ∪ δ(q6, 0) ∪ δ(q7, 0) ∪ δ(qf, 0) = ∅ ∪ {qf} ∪ ∅ ∪ {q7} = {q7, qf}
δ'[(q5, q6, q7, qf), 1] = δ(q5, 1) ∪ δ(q6, 1) ∪ δ(q7, 1) ∪ δ(qf, 1) = {q0} ∪ ∅ ∪ {qf} ∪ {qf} = {q0, qf}
Thus, the new transition function δ' of the DFA can be represented by the transition table as:
Now,
i. Acceptance of string 0011100
δ'(q0, 0011100) = (qr, 011100)
= (qs, 11100)
= (qf, 1100)
= (qf, 100)
= (qf, 00)
= (q7, 0)
= (∅, ε)
This implies that the string is not acceptable by the Finite Automata.
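The computation of δ' above is an instance of the general subset construction: each DFA state is a set of NFA states, and new sets are explored until no more appear. A generic sketch (for an NFA without ε-transitions, using an invented toy machine rather than the one above) looks like this:

```python
from collections import deque

def nfa_to_dfa(nfa_delta, start, alphabet):
    """Subset construction: DFA states are frozensets of NFA states."""
    start_set = frozenset([start])
    dfa_delta, queue, seen = {}, deque([start_set]), {start_set}
    while queue:
        S = queue.popleft()
        for a in alphabet:
            # union of delta(q, a) over all NFA states q in the current subset S
            T = frozenset(p for q in S for p in nfa_delta.get((q, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa_delta

# Toy NFA that accepts strings over {0, 1} containing the substring 01.
nfa_delta = {('q0', '0'): {'q0', 'q1'}, ('q0', '1'): {'q0'},
             ('q1', '1'): {'qf'}, ('qf', '0'): {'qf'}, ('qf', '1'): {'qf'}}
dfa_delta = nfa_to_dfa(nfa_delta, 'q0', '01')
print(len({S for (S, _) in dfa_delta}))  # 4 subset-states are reachable
```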
EXERCISE – III
Example
For the given transition table, the finite automaton M = ({q0, q1, q2,
q3}, {0, 1}, δ, q0, {q3}), where δ can be defined as:
Step 1: Define the equation for each state with reference to the input symbols (only incoming
arrows).
Here,
q0 = q0.1 + q1.1 + q2.1 + ε ---- I
q1 = q0.0 ---- II
q2 = q1.0 ---- III
q3 = q2.0 + q3.0 + q3.1 ---- IV
EXERCISE – IV
A. Union
Let M1 = (Q1, Σ1, δ1, q1, F1) be an NFA that accepts a
regular language L1 = L(M1) and M2 = (Q2, Σ2, δ2, q2,
F2) be another NFA that accepts a regular language L2
= L(M2). Here, we assume that Q1 and Q2 are two
disjoint sets. We construct an NFA M that accepts L1 ∪ L2.
B. Concatenation
Let M1 = (Q1, Σ1, δ1, q1, F1) be an NFA that accepts a
regular language L1 = L(M1) and M2 = (Q2, Σ2, δ2, q2,
F2) be another NFA that accepts a regular language L2 =
L(M2). Now, we construct an NFA M = (Q, Σ, δ, q, F)
such that it accepts L = L(M) = L(M1) ● L(M2).
Where,
• Q = Q1 ∪ Q2
• Σ = set of input symbols = Σ1 ∪ Σ2
• δ = δ1 ∪ δ2 ∪ {δ(f, ε) = q2 for every f ∈ F1}
• q = start state = q1
• F = F2.
Where,
• Q = Q1 ∪ {q}
• Σ = set of input symbols = Σ1
• δ = δ1 ∪ {δ(q, ε) = q1} ∪ {δ(f, ε) = q1 for every f ∈ F1}
• q = new start state
• F = F1 ∪ {q}.
Here, M consists of the states of M1 and all the transitions of M1, and any final state of
M1 is also a final state of M. In addition, M has a new initial state q. The new initial state is
also final, so that ε is accepted. From it, the state q1 can be reached on the input ε, so that the
operation of M1 can be initiated after M has been started in state q. Finally, transitions
on input ε are added from each final state of M1 back to q1, so that once a string in L(M1) has been
read, computation can resume from the initial state of M1.
It follows by inspection of M that if w ∈ L(M) then either w = ε or w = w1w2 … wk for some k ≥ 1,
where each wi ∈ L(M1).
Statement
Let M = (Q, Σ, δ, q0, F) be a finite automaton with n states that accepts a regular language L. Let
w be any string such that w ∈ L and |w| ≥ n. Then there exist x, y, z such that
1. w = xyz,
2. y ≠ ε (given assumption),
3. |xy| ≤ n (|xy| = n when z = ε), and
4. xy^iz ∈ L, for all i ≥ 0.
Here,
n = number of states,
w = any string with w ∈ L,
|w| = m = length of the string w (if w = abcd, |w| = 4),
y^i = the string y repeated i times (i may be arbitrarily large).
Proof
Let w = a1a2a3 … am (m is the length of the string).
δ(q0, a1a2a3 … ai) = qi (i = 1, 2, 3, …, n) gives the sequence of states on the path with path value
w.
Q' = {q0, q1, q2, …, qn}
Now, the input string w can be decomposed into three substrings as:
x = a1a2a3 … aj
y = aj+1 … ak
z = ak+1 … am
Here, on reading the string w = xyz of length m ≥ n, the path visits m + 1 states, which is more
than the n distinct states available. Thus, by the pigeonhole principle, at least two states on the
path must coincide: there are only n distinct states defined, but applying the input string produces
at least n + 1 states on the path. Among the various pairs of repeated states, we take the first
such pair qj and qk (i.e. qj = qk) for merging, and hence the path with value w
in the transition diagram of M is shown in the figure below.
For |xy| ≤ n
The verification of this case follows directly from the given FA defined for the string
w. Since w = xyz = a1a2 … aj aj+1 … ak ak+1 … am, and the repeated pair can be chosen so that
1 ≤ j < k ≤ n, this implies that |xy| ≤ n.
For example: S → a/b/SS/MM, M → p/D/q, D → x/y/z, etc. This implies that each non-
terminal is defined separately (individually), so the productions are context-free with respect to replacement.
For example: aaSbc → aab/Sbb/aaS, S → aa/bb, etc. The first production rule implies
that when replacing aaSbc we have to consider the present context of S, i.e. the symbols ahead of it and
behind it as well.
The production rules (productions, or rewriting rules) are the kernel of any grammar and
language specification. The productions are used to derive one string over V from another
string. When applying a production rule, the reverse substitution is not permitted, i.e. if
S → AA then AA → S is not possible.
Note: Is there any relationship between regular expressions and context-free
grammars? Explain with a suitable example.
Solution
We know from the Chomsky hierarchy of grammars that every regular expression can be described by
a context-free grammar (CFG), but the reverse is not true. This can be verified
with the following example. Let a regular expression be R = a(a* + b*)b; then the production rules
for R can be generated as:
Numerical – 1: Write a CFG to generate only palindromes over the input symbols Σ = {0, 1}.
Solution
Let G = (V, Σ, R, S) be a context-free grammar (CFG) that describes only palindromic strings,
where V = finite set of non-terminals or variables = {S}
Σ = finite set of terminals = {0, 1}
S = starting non-terminal symbol, S ∈ V
R = production rules, which can be described as: S → 0S0/1S1/ε.
Note: A production rule such as S → 0S/1S/ε or S → SS/1/0/ε would generate both palindrome and non-
palindrome strings, not only palindromes as above.
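The grammar above can be animated by expanding S outward a bounded number of times; the short sketch below (our own helper, not part of the notes) enumerates the palindromes it generates up to a given expansion depth.

```python
def palindromes(max_depth):
    """Strings derivable from S -> 0S0 | 1S1 | ε using at most max_depth wrapping steps."""
    result = {''}                                                # depth 0: S -> ε
    for _ in range(max_depth):
        result |= {a + w + a for w in result for a in '01'}      # S -> 0S0 | 1S1
    return result

print(sorted(palindromes(2), key=lambda w: (len(w), w)))
# ['', '00', '11', '0000', '0110', '1001', '1111']
```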
Derivations
The process of deriving the required string over (V ∪ Σ)* by applying the given production rules to the
existing string is called derivation. The string generated by the most recent application of a
production is called the working string. The derivation of a string is complete when the working string
cannot be modified further. Different derivation orders can yield quite different sentential forms in, say,
a context-sensitive grammar, but for a context-free grammar it really does not make much
difference in what order you expand the variables.
Hence, we write a sequence of derivation steps in G from α1 to αm as α1 ⇒ α2 ⇒ … ⇒ αm, or α1 ⇒* αm.
If a string can be derived by more than one sequence of applications of the given production rules,
then the string is called a derivable string and the derivation relation is denoted by ⇒*.
B. Rightmost derivation
A derivation is called a rightmost derivation if we apply a production only to the
rightmost variable at every step. For example, if the given production rules of any CFG are
as: S → 0B/1A, A → 0/0S/1AA, & B → 1/1S/0BB, then any
given string w = 00110101 can be derived using the rightmost
derivation in the following manner:
S ⇒ 0B
⇒ 00BB (as B → 0BB)
⇒ 00B1 (as B → 1)
⇒ 001S1 (as B → 1S)
⇒ 0011A1 (as S → 1A)
⇒ 00110S1 (as A → 0S)
⇒ 001101A1 (as S → 1A)
⇒ 00110101 (as A → 0)
A. Derivation Tree
It is easy to visualize derivations in context-free languages, as we can represent derivations
using a tree structure. Such trees representing derivations are called derivation trees or parse
trees. A parse tree is an ordered tree in which each internal node is labeled with the left side of a
production (i.e. a non-terminal only) and its children, read left to right, represent the
corresponding right side (i.e. terminals, non-terminals, or both).
A derivation tree for a CFG G = (V, Σ, R, S) is a tree satisfying the following conditions:
1. Every vertex has a label, which is a variable, a terminal, or the empty string (ε).
2. The root has label ‘S’ (i.e. start symbol).
3. The label of an internal vertex is a variable.
4. Each vertex labeled with a variable is expanded towards the leaf nodes (i.e. terminals) using the
production rules (R).
Here, the tree is a derivation tree with yield 00110101. The yield of a derivation tree is the
concatenation of the labels of the leaves without repetition in the left-to-right ordering.
Exercise
Consider a CFG with S → XX & X → XXX/bX/Xb/a; then find the parse tree for the given string
w = bbaaaab.
Consider the grammar G with productions S → aXY, X → bYb, & Y → X/c; then find the
parse tree for the string w = abbbb.
Note: The derivation tree does not specify the order in which the productions are applied to obtain the
required string, so the same derivation tree can correspond to several derivations. In general, we use the
leftmost derivation rather than the rightmost derivation.
A context-free grammar G is ambiguous if there exists some terminal string w ∈ L(G) that is
ambiguous. A terminal string w is ambiguous if there exist two or more leftmost derivations
for the single w. In other words, a single terminal string w is ambiguous if it may be the yield of two
derivation trees.
For example: Consider G = ({S}, {a, b, +, *}, R, S), where R consists of S → S+S/S*S/a/b.
Now, we have two derivation trees for the terminal string w = a + a*b, as given below:
Leftmost derivation – I
S ⇒ S+S
⇒ a+S
⇒ a+S*S
⇒ a+a*S
⇒ a+a*b
Leftmost derivation – II
S ⇒ S*S
⇒ S+S*S
⇒ a+S*S
⇒ a+a*S
⇒ a+a*b
Since there exist two leftmost derivations (and hence two derivation trees) for the same terminal
string w = a + a*b, w is ambiguous. Hence, we can conclude that the given grammar G is ambiguous.
Exercise
Consider any grammar G with the production rule S → SbS/a. Show that the given
grammar is ambiguous. Assume the terminal string w = abababa.
Note: Any context-free grammar (CFG) can be converted into Chomsky normal form
(CNF); CNF is a restricted form of CFG, so every grammar in CNF is already a CFG.
Proof idea:
1. Show that any CFG G can be converted into a CFG G′ in Chomsky normal form;
2. The conversion procedure has several stages, in which rules that violate the Chomsky normal
form conditions are replaced with equivalent rules that satisfy these conditions;
3. Order of transformations: (1) add a new start variable, (2) eliminate all null productions,
(3) eliminate unit productions, (4) eliminate terminals on the R.H.S. (except as single-symbol
right sides), and (5) restrict the number of variables on the R.H.S. to two;
4. Check that the obtained CFG G′ defines the same language as the initial CFG G.
Solution
In CNF, the restriction is that the right-hand side of every production consists of either exactly two
non-terminals or exactly one terminal (a single leaf), and there are no null
productions or unit productions. Thus, the production rules can be constructed as:
B → b & D → d are in R'.
S → aAD gives S → CaAD, where Ca → a; and S → CaAD gives S → CaC1, where C1 → AD.
A → aB gives A → CaB, where Ca → a.
A → bAB gives A → BAB (since B → b), and A → BAB gives A → BCb, where Cb → AB.
Hence, let G' = (V', Σ, R', S') be the newly constructed CNF grammar equivalent to the given CFG,
where
V' = set of non-terminals only = {S, A, B, D, Ca, Cb, C1}
Σ = set of terminals only = {a, b, d}
S' = start symbol = S
R' = set of production rules defined as: S → CaC1, A → CaB, A → BCb, Ca → a,
Cb → AB, C1 → AD, B → b & D → d.
For example: If the production rules of a context-free grammar G are given as S →
aBASA/aBA, A → aAA/a, & B → bBB/b, then construct the equivalent CNF.
Solution
We can construct the CNF for the given CFG by defining the production rules as:
B → b & A → a.
S → aBASA gives S → CaBASA, where Ca → a.
S → CaBASA gives S → CaC1SA, where C1 → BA.
S → CaC1SA gives S → CaC1C2, where C2 → SA.
S → CaC1C2 gives S → CaC3, where C3 → C1C2.
S → aBA gives S → CaBA, where Ca → a.
S → CaBA gives S → CaC1, where C1 → BA.
A → aAA gives A → CaAA, where Ca → a.
A → CaAA gives A → CaC4, where C4 → AA.
B → bBB gives B → CbBB, where Cb → b.
B → CbBB gives B → CbC5, where C5 → BB.
Hence, let G' = (V', Σ, R', S') be the newly constructed CNF grammar equivalent to the given CFG,
where
V' = set of non-terminals only = {S, A, B, Ca, Cb, C1, C2, C3, C4, C5}
Σ = set of terminals only = {a, b}
S' = start symbol = S
R' = set of production rules defined as: S → CaC3/CaC1, A → CaC4/a, B → CbC5/b,
Ca → a, Cb → b, C1 → BA, C2 → SA, C3 → C1C2, C4 → AA & C5 → BB.
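Whether a set of productions is in CNF can be verified mechanically: every right-hand side must be either two variables or one terminal. The checker below is a sketch using our own grammar encoding, applied to the CNF just constructed.

```python
def is_cnf(rules, variables):
    """rules maps a variable to a list of right-hand sides, each a tuple of symbols."""
    for rhss in rules.values():
        for rhs in rhss:
            two_vars = len(rhs) == 2 and all(s in variables for s in rhs)
            one_term = len(rhs) == 1 and rhs[0] not in variables
            if not (two_vars or one_term):
                return False
    return True

variables = {'S', 'A', 'B', 'Ca', 'Cb', 'C1', 'C2', 'C3', 'C4', 'C5'}
rules = {'S': [('Ca', 'C3'), ('Ca', 'C1')], 'A': [('Ca', 'C4'), ('a',)],
         'B': [('Cb', 'C5'), ('b',)], 'Ca': [('a',)], 'Cb': [('b',)],
         'C1': [('B', 'A')], 'C2': [('S', 'A')], 'C3': [('C1', 'C2')],
         'C4': [('A', 'A')], 'C5': [('B', 'B')]}
print(is_cnf(rules, variables))  # True
```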
Solution
We can construct the GNF for the given CFG by defining the production rules.
Here, we have
S → abaSa/aba.
Now, let us introduce new variables A and B with productions A → a & B → b, and substitute
into the given grammar as
S → aBASA/aBA,
A → a, &
B → b.
Hence, let G' = (V', Σ, R', S') be the newly constructed GNF grammar equivalent to the given CFG,
where
V' = set of non-terminals only = {S, A, B}
Σ = set of terminals only = {a, b}
S' = start symbol = S
R' = set of production rules defined as: S → aBASA/aBA, A → a & B → b.
Example: If the production rules of a context-free grammar G are given as S → AB, A →
aA/bB/b & B → b, then construct the equivalent GNF.
Solution
We can construct the GNF for the given CFG by defining the production rules as:
S → AB gives S → aAB (since A → aA),
S → AB gives S → bBB (since A → bB),
S → AB gives S → bB (since A → b),
A → aA/bB/b,
B → b.
Now, let G' = (V', Σ, R', S') be the newly constructed GNF grammar equivalent to the given CFG,
where
V' = set of non-terminals only = {S, A, B}
Σ = set of terminals only = {a, b}
S' = start symbol = S
R' = set of production rules defined as: S → aAB/bBB/bB, A → aA/bB/b &
B → b.
Example: If the production rules of a context-free grammar G are given as S → AB, A →
a, B → C/b, C → D, D → E & E → a, then remove the unit productions.
Solution
The given grammar contains the following unit productions:
B → C,
C → D, &
D → E.
Eliminating them by substitution gives B → a/b (since C → D → E → a). Now, the useless productions of
the resulting CFG can be identified by finding the non-terminals that are not reachable from the start
symbol. Here, only the non-terminals A and B are reachable from the start symbol, so the productions of
C, D and E (i.e. C → a, D → a & E → a) are useless and never used. Hence, we can eliminate them.
Now, let G' = (V', Σ, R', S') be the newly constructed CFG, which is a completely reachable
grammar,
where
V' = set of non-terminals only = {S, A, B}
Σ = set of terminals only = {a, b}
S' = start symbol = S
R' = set of production rules defined as: S → AB, A → a & B → a/b.
Example: If the production rules of a context-free grammar G are given as S → A/bb, A →
B/b, & B → S/a, then remove the unit productions.
Solution
The given grammar contains the following unit productions:
S → A,
A → B, &
B → S.
Also, the terminal productions obtained through the non-terminals are:
B → a gives S → a & hence A → a;
A → b gives S → b & hence B → b.
Example: If the production rules of a context-free grammar G are given as S → aA & A →
b/ε, then remove the null productions.
Solution
The given grammar has the null production A → ε. So, put ε in place of A on the right
side of the productions and add the resulting productions to the grammar.
Thus,
S → aA gives S → a.
Hence, after removing the null production, the final production rules are S → aA/a and A → b.
Example: If the production rules of a context-free grammar G are given as S → ABAC,
A → aA/ε, B → bB/ε, & C → c, then remove the null productions.
Solution
The given grammar contains the following null productions:
A → ε,
B → ε.
First, put ε in place of A on the right side of the productions; this gives
S → BAC, S → ABC, S → BC, and A → a.
Again, put ε in place of B on the right side of the productions; this gives
S → AAC, S → AC, S → C, and B → b.
Thus, the final simplified grammar becomes: S → ABAC/BAC/ABC/BC/AAC/AC/C, A → aA/a,
B → bB/b and C → c.
Hence, let G' = (V', Σ, R', S') be the newly constructed null-free CFG,
where
V' = set of non-terminals only = {S, A, B, C}
Σ = set of terminals only = {a, b, c}
S' = start symbol = S
R' = set of production rules: S → ABAC/BAC/ABC/BC/AAC/AC/C, A → aA/a, B → bB/b and
C → c.
Exercise: Consider the following grammar and remove the null productions.
S → aSa/bSb/ε
S → a/Xb/aYa/ε, X → Y/ε, Y → b/ε.
Example: Eliminate the useless productions from the context-free grammar G, where V = {S, A,
B, C} and Σ = {a, b}, with the productions S → aS/A/C, A → a, B → aa, & C → aCb.
Solution
First identify the set of variables that can lead to a terminal string,
i.e. A → a,
B → aa, and
S → aS/A.
Since C cannot generate any terminal string, we remove C. Now we get a new context-free
grammar G1 having V1 = {S, A, B} and Σ1 = {a} with the production rules R1 defined
as: S → aS/A, A → a, B → aa.
The next step is the elimination of the variables that cannot be reached from the start variable
(symbol). For this, we draw a dependency graph (or transition diagram) whose
vertices are labelled with non-terminals, and an edge from C to D is drawn if and
only if there is a production of the form C → xDy.
Here, the non-terminal B is useless (unreachable). Hence the simplified grammar is G' = (V', Σ', R', S'),
where
V' = set of non-terminals only = {S, A}
Σ' = set of terminals only = {a}
S' = start symbol = S
R' = set of production rules: S → aS/A, A → a.
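Both steps of this procedure are simple fixed-point/reachability computations. The sketch below (using our own grammar encoding) reproduces the two checks on the example grammar: C is not generating, and B is not reachable.

```python
def generating(rules, terminals):
    """Variables (plus terminals) that can derive a terminal string."""
    gen, changed = set(terminals), True
    while changed:
        changed = False
        for lhs, rhss in rules.items():
            if lhs not in gen and any(all(s in gen for s in rhs) for rhs in rhss):
                gen.add(lhs)
                changed = True
    return gen

def reachable(rules, start):
    """Symbols reachable from the start symbol."""
    reach, stack = {start}, [start]
    while stack:
        for rhs in rules.get(stack.pop(), []):
            for s in rhs:
                if s not in reach:
                    reach.add(s)
                    stack.append(s)
    return reach

# S -> aS | A | C, A -> a, B -> aa, C -> aCb
rules = {'S': [('a', 'S'), ('A',), ('C',)], 'A': [('a',)],
         'B': [('a', 'a')], 'C': [('a', 'C', 'b')]}
print(generating(rules, {'a', 'b'}))  # C is absent: it never leads to a terminal string
print(reachable(rules, 'S'))          # B is absent: it cannot be reached from S
```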
A. Union
Let L1 and L2 be two context-free languages generated
by the context-free grammars G1 = (V1, Σ1, R1, S1) and
G2 = (V2, Σ2, R2, S2) respectively. Now, we construct a
new language L(G) using the grammar G = (V, Σ, R,
S) such that it accepts L(G1) ∪ L(G2),
where
• V = V1 ∪ V2 ∪ {S}
• Σ = set of terminal symbols = Σ1 ∪ Σ2
• R = R1 ∪ R2 ∪ {S → S1/S2}
• S = start symbol.
B. Concatenation
Let L1 and L2 be two context-free languages generated
by the context-free grammars G1 = (V1, Σ1, R1, S1) and
G2 = (V2, Σ2, R2, S2) respectively. Now, we construct a
new language L(G) using the grammar G = (V, Σ, R,
S) such that it accepts L(G1) ● L(G2),
where
• V = V1 ∪ V2 ∪ {S}
• Σ = set of terminal symbols = Σ1 ∪ Σ2
• R = R1 ∪ R2 ∪ {S → S1S2}
• S = start symbol.
Now, let us choose strings w1 ∈ L1 and w2 ∈ L2. We know that S1 ⇒* w1 and S2 ⇒* w2; in the
above grammar G, S → S1S2, so S leads to the concatenation of the strings w1 & w2, i.e. w1w2,
and the generated language is L1L2. Since L1 & L2 are CFLs, L1L2 is also a CFL.
C. Kleene closure
Let L1 be a context-free language generated by the
context-free grammar G1 = (V1, Σ1, R1, S1). Now, we
construct a new language L(G) using the grammar G =
(V, Σ, R, S) such that it accepts the Kleene star of the
language L1 (i.e. L1*).
Here, R satisfies all the properties of a CFG, since R1 is the set of productions of the given CFG and the
added productions S → S1, S → ε & S → S1S also fulfil the requirement; so we say that G is a CFG
that generates the context-free language L1*.
While the pumping lemma is often a useful tool to prove that a given language is not context-
free, it does not give a complete characterization of the context-free languages. If a language
does not satisfy the condition given by the pumping lemma, we have established that it is not
context-free. On the other hand, there are languages that are not context-free, but still satisfy the
condition given by the pumping lemma. There are more powerful proof techniques available,
such as Ogden's lemma, but also these techniques do not give a complete characterization of the
context-free languages.
Statement
Let L be a context-free language and n be the pumping length. Then:
i. every z ∈ L with |z| ≥ n can be written as z = uvwxy for some u, v, w, x & y (i.e. any string
z can be decomposed into five sub-strings u, v, w, x & y);
ii. |vx| ≥ 1 (i.e. v or x may be empty, but not both; we must have at least one non-empty sub-string to
pump in order to generate infinitely many new strings);
iii. |vwx| ≤ n (i.e. if u & y are ε, then |vwx| = n); and
iv. uv^k w x^k y ∈ L for all k ≥ 0 (i.e. we can generate infinitely many strings by choosing any value of k).
Proof
To prove the theorem, we consider a CFG
whose productions are given by: S → AB, A →
aB/a, B → bA/b.
Similarly,
S ⇒* uBy
⇒* uvBxy (since B ⇒* vBx)
⇒* uvvBxxy (since B ⇒* vBx)
⇒* uvvwxxy (since B ⇒* w).
Thus, S ⇒* uv^2wx^2y, where k = 2, and so on.
Hence, we can conclude that S ⇒* uBy gives S ⇒* uv^k w x^k y ∈ L for all k ≥ 0. This proves the theorem.
Solution
Let us consider that L is a context-free language. Also, let z = a^p b^p c^p, z ∈ L.
Now, according to the pumping lemma for CFLs, the string z can be decomposed into u,
v, w, x, & y as follows:
u = a^r
v = a^s (s > 0)
w = a^(p-(r+s))
x = b^t (t > 0)
y = b^(p-t) c^p
Solution
Let us consider that L is a context-free language. Also, let z = a^p, z ∈ L.
Now, according to the pumping lemma for CFLs, the string z can be decomposed into u,
v, w, x, & y as follows:
u = a^α, v = a^β (β > 0), w = a^(q-(α+β)), x = a^γ (γ > 0), & y = a^(p-(q+γ)).
Now, using the pumping lemma, z = uv^k w x^k y for k ≥ 0, we have
z = uv^2wx^2y = a^α (a^β)^2 a^(q-(α+β)) (a^γ)^2 a^(p-(q+γ)) = a^(p+β+γ) ≠ a^p.
Here, our assumption z ∈ L contradicts our result.
Hence, we can conclude that the given language L is not a context-free language.
There is an input tape from which the finite control reads the input and, at the same time, it reads the
symbol at the top of the stack. The finite control then determines the next state and
what happens to the stack. In one transition, the pushdown automaton does the following:
Consumes the input symbol that it uses in the transition. If the input is ε, then no
input symbol is consumed.
Goes to a new state, which may or may not be the same as the previous state.
Replaces the symbol at the top of the stack by any string. It could be the same symbol that
appeared at the top of the stack.
Mathematically, the pushdown automaton is a six-tuple, defined as P = (Q, Σ, Γ, δ, q0, F),
where
– Q is the non-empty finite set of states.
– Σ is the non-empty finite set of input symbols.
– Γ is a finite non-empty set of pushdown or stack symbols.
– q0 is the start state, q0 ∈ Q.
– F is a set of final states, F ⊆ Q.
– δ is a transition function which maps Q × (Σ ∪ {ε}) × (Γ ∪ {ε}) to finite subsets of Q × Γ*. Formally, δ takes as
argument a triple (q, a, x), where
q is a state in Q,
a is an input symbol (or ε), and
x is a stack symbol that is a member of Γ (or ε).
The output of δ is a finite set of pairs (p, r), where p is the new state and r is the string of
stack symbols that replaces x at the top of the stack.
Move of PDA
The pushdown automaton makes the following kinds of moves:
δ(q, a, x) → (p, y) means that whenever the PDA is in state q with x on the top of the stack, it may
read a from the input tape, replace x by y on the top of the stack and enter state p.
δ(q, a, ε) → (p, y) indicates that the PDA pushes y onto the top of the stack.
δ(q, a, y) → (p, ε) indicates that the PDA pops the symbol y from the top of the stack.
Solution
To solve the problem, we have to first analyze the given language so that we can generate the
PDA for it. As we can see, the string must have equal number of ‘a’ and ‘b’ and the order of
placement is a block of a's followed by a block of b's. Thus, to design such a PDA, we have to
read the a's and then the same number of b's; finally, the input string must be consumed and the stack
must be empty for the accepting condition. To do so, we push the whole string of a's,
one by one, onto the empty stack, and when the b's start to be read, we pop one a from the
stack for each b read. Hence, when the input string is consumed, there is nothing
on the stack either, and that is what we term the accepting condition of the PDA.
Now, let the required PDA for the given language be P = (Q, Σ, Γ, δ, q0, F), where
– Q = finite set of states = {q0, q1, qf}
– Σ = finite set of input symbols = {a, b}
– Γ = finite set of stack symbols = {a}
– q0 = start state
– F = set of final states = {qf}
– δ = transition function, which can be
defined as
i. δ(q0, a, ε) → (q1, a)
ii. δ(q1, a, a) → (q1, aa)
iii. δ(q1, b, a) → (q1, ε)
iv. δ(q1, ε, ε) → (qf, ε)
For example, let w = aaabbb be any string; then we can process it using the PDA defined above, in
tabular form:
Present state | Unread input | Present stack symbols | Next state | Next stack symbols | Transition used
q0 | aaabbb | ε   | q1 | a   | (i)
q1 | aabbb  | a   | q1 | aa  | (ii)
q1 | abbb   | aa  | q1 | aaa | (ii)
q1 | bbb    | aaa | q1 | aa  | (iii)
q1 | bb     | aa  | q1 | a   | (iii)
q1 | b      | a   | q1 | ε   | (iii)
q1 | ε      | ε   | qf | ε   | (iv)
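The table above can be reproduced with a direct simulation that uses a Python list as the stack; acceptance requires the input to be consumed and the stack to be empty, mirroring transitions (i)–(iv). This is a sketch of this particular PDA, not a general PDA simulator.

```python
def pda_accepts(w):
    """Accepts exactly the strings a^n b^n with n >= 1."""
    stack, state = [], 'q0'
    for symbol in w:
        if symbol == 'a' and (state == 'q0' or (state == 'q1' and stack)):
            stack.append('a')            # transitions (i)/(ii): push an 'a'
            state = 'q1'
        elif symbol == 'b' and state == 'q1' and stack:
            stack.pop()                  # transition (iii): pop one 'a' per 'b'
        else:
            return False                 # no applicable transition
    return state == 'q1' and not stack   # transition (iv): reach qf on empty stack

for w in ['aaabbb', 'aabbb', 'abab', '']:
    print(repr(w), pda_accepts(w))
# 'aaabbb' True, 'aabbb' False, 'abab' False, '' False
```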
Example: Design a PDA which accepts the language L = {w ∈ {a, b}* : w has an equal number
of a's and b's}.
Solution
As we can see, the string must have an equal number of a's and b's, in any order of placement.
Hence, let us consider P = (Q, Σ, Γ, δ, q0, F) to be the required PDA for the given language, where
– Q = finite set of states = {q0, qf}
– Σ = finite set of input symbols = {a, b}
– Γ = finite set of stack symbols = {a, b}
– q0 = start state
– F = set of final states = {qf}
– δ = transition function, which can be defined as
i. δ(q0, a, ε) → (q0, a)
ii. δ(q0, b, ε) → (q0, b)
For example, let w = abbbaaba be any string; then we can process it using the PDA defined above,
in tabular form:
Present state | Unread input | Stack top | Next state | New stack top | Transition used
q0 | abbbaaba | ε  | q0 | a  | (i)
q0 | bbbaaba  | a  | q0 | ε  | (vi)
q0 | bbaaba   | ε  | q0 | b  | (ii)
q0 | baaba    | b  | q0 | bb | (iv)
q0 | aaba     | bb | q0 | b  | (v)
q0 | aba      | b  | q0 | ε  | (v)
q0 | ba       | ε  | q0 | b  | (ii)
q0 | a        | b  | q0 | ε  | (v)
q0 | ε        | ε  | qf | ε  | (vii)
Example: Design a PDA which accepts the language L = {wcw^T : w ∈ {a, b}*}, where w^T denotes the
reverse of w.
Solution
As we can see, the string on the right of c must be the mirror image (reverse) of the string of a's and
b's on the left of c.
Hence, let us consider that P = (Q, Σ, Γ, δ, q0, F) is the required PDA for the given language, where
– Q = finite set of states = {q0, q1, qf}
– Σ = finite set of input symbols = {a, c, b}
– Γ = finite set of stack symbols = {a, b}
– q0 = start state
– F = set of final states = {qf}
– δ = transition function, which can be defined as
i. δ(q0, a, ε) → (q0, a)
ii. δ(q0, a, a) → (q0, aa)
iii. δ(q0, a, b) → (q0, ab)
iv. δ(q0, b, b) → (q0, bb)
v. δ(q0, b, a) → (q0, ba)
vi. δ(q0, c, a) → (q1, a), i.e. only the state changes.
vii. δ(q0, c, b) → (q1, b), i.e. only the state changes.
For example, let w = aabcbaa be any string; then we can process it using the PDA defined above, in
the following tabular form:
Present state | Unread input | Stack top | Next state | New stack top | Transition used
q0 | aabcbaa | ε   | q0 | a   | (i)
q0 | abcbaa  | a   | q0 | aa  | (ii)
q0 | bcbaa   | aa  | q0 | baa | (viii)
q0 | cbaa    | baa | q1 | baa | (ix)
q1 | baa     | baa | q1 | aa  | (x)
q1 | aa      | aa  | q1 | a   | (v)
q1 | a       | a   | q1 | ε   | (v)
q1 | ε       | ε   | qf | ε   | (xi)
Solution
As we can see, the string must have half as many a's as b's, with the a's
placed before the b's. Hence, let us consider P = (Q, Σ, Γ, δ, q0, F) to be the required
PDA for the given language, where
– Q = finite set of states = {q0, q1, qf}
– Σ = finite set of input symbols = {a, b}
– Γ = finite set of stack symbols = {a}
– q0 = start state
– F = set of final states = {qf}
– δ = transition function, which can be defined as
i. δ(q0, a, ε) → (q1, aa)
ii. δ(q1, a, a) → (q1, aaa)
iii. δ(q1, b, a) → (q1, ε)
iv. δ(q1, ε, ε) → (qf, ε)
For example, let w = aabbbb be any string; then we can process it using the PDA defined above, in
the following tabular form:
Present state | Unread input | Present stack symbols | Next state | Next stack symbols | Transition used
q0 | aabbbb | ε    | q1 | aa   | (i)
q1 | abbbb  | aa   | q1 | aaaa | (ii)
q1 | bbbb   | aaaa | q1 | aaa  | (iii)
q1 | bbb    | aaa  | q1 | aa   | (iii)
q1 | bb     | aa   | q1 | a    | (iii)
q1 | b      | a    | q1 | ε    | (iii)
q1 | ε      | ε    | qf | ε    | (iv)
Example: Construct a PDA equivalent to the following CFG: S → 0BB, B → 0S/1S/0. Also,
test whether 010000 is accepted by the PDA or not.
Solution
We can define the PDA as P = (Q, Σ, Γ, δ, q0, F), where
– Q = finite set of states = {q0, qf}
– Σ = finite set of input symbols = {0, 1}
– Γ = finite set of stack symbols = {S, B, 0, 1}
– q0 = start state
– F = set of final states = {qf}
– δ = transition function, which can be defined by the following rules:
i. δ(q0, ε, ε) → (qf, S)
ii. δ(qf, ε, S) → (qf, 0BB)
iii. δ(qf, ε, B) → {(qf, 0S), (qf, 1S), (qf, 0)}
iv. δ(qf, 0, 0) → (qf, ε)
v. δ(qf, 1, 1) → (qf, ε)
For the given string w = 010000, we can process it using the defined transition rules.
Many types of Turing machines are used to define complexity classes, such as deterministic
Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum
Turing machines, symmetric Turing machines and alternating Turing machines. They are all
equally powerful in principle, but when resources (such as time or space) are bounded, some of
these may be more powerful than others.
Definition
A Turing machine can be thought of as a finite state automaton
connected to a R/W (read/write) head. It has one tape,
which is divided into a number of squares or cells. The
block diagram of the basic model of the Turing machine is
shown in the figure:
The machine consists of a finite control that can be
in any of a finite set of states. The tape is divided
into cells, each of which can hold any one of a
finite number of symbols.
Initially, the input, which is a finite-length string of
symbols chosen from the input alphabet, is placed
on the tape. All other tape cells, extending infinitely to the left and right, initially hold the
special symbol called the blank (#). The blank is a tape symbol but not an input symbol.
There is a tape head that is always positioned at one of the tape cells; the Turing
machine is said to be scanning that cell. Initially, the tape head is at the leftmost cell that
holds the input string.
Mathematically, the Turing machine M = (Q, Σ, Γ, δ, q0, #, F) is described by a 7-tuple,
where
Q = finite non-empty set of states
Σ = finite non-empty set of input symbols (the input alphabet)
Γ = complete set of tape symbols (Σ ∪ {#} ⊆ Γ)
δ = transition function, mapping Q × Γ to Q × Γ × {L, R}
q0 = initial state or start state (q0 ∈ Q)
# = blank symbol (# ∈ Γ, # ∉ Σ)
F = subset of Q (i.e. F ⊆ Q) consisting of the final states (i.e. one or more accepting states).
Here, the present symbol under the R/W head is b and the present state is q3. Thus, b must be
written to the right of q3. The sequence of non-blank symbols
to the left of b is aba, which must be written to the
left of q3. The sequence of non-blank symbols to
the right of b is abba. Hence, the ID can be
shown as in the following figure. The move δ(q3, b) induces
a change in the ID of the TM. We call this change in
ID a move.
Suppose δ(q, xi) = (p, y, L) is a move of the TM, where the input string is represented as
x1x2 … xn. Here, the present symbol under the R/W head is xi.
Thus,
ID before processing:
x1x2 … xi-1 q xi xi+1 … xn
ID after processing:
x1x2 … xi-2 p xi-1 y xi+1 … xn
Let us consider the Turing machine M = (Q, Σ, Γ, δ, q0, #, F). A string w in Σ* is said
to be accepted by M if q0w ⊢* α1pα2 for some p ∈ F and α1, α2 ∈ Γ*. Here, p is a
final state reached after reading the whole input string. The Turing machine M does
not accept w if it halts in a non-final state or never halts.
For example
Let a Turing machine M = (Q, Σ, Γ, δ, q0, #, F) be described by the 7-tuple,
where
Q = set of states = {q0, q1}
Σ = set of input symbols = {0, 1}
Γ = set of tape symbols (Σ ∪ {#}) = {0, 1, #}
q0 = initial state or start state (q0 ∈ Q)
# = blank symbol (# ∈ Γ)
q1 = final state
δ = transition function, defined by the given transition diagram.
Now, check whether M accepts the given string w = 00011 or not.
Solution
Here, the transition function δ can be defined as:
δ(q0, 0) → (q0, 0, R)
δ(q0, 1) → (q1, 1, R)
δ(q1, 0) → (q1, 0, R)
δ(q1, 1) → (q1, 1, R)
Since the TM reaches the final state after processing the whole input string, we say that the string
is accepted by the TM.
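The acceptance check can be automated with a small deterministic-TM simulator; the sketch below (our own code) represents the tape as a dictionary from cell index to symbol and accepts as soon as a final state is entered, which is sufficient for this example.

```python
def tm_accepts(delta, w, start, finals, blank='#', max_steps=1000):
    """Run a deterministic TM on input w; True iff a final state is reached."""
    tape = {i: s for i, s in enumerate(w)}
    state, head = start, 0
    for _ in range(max_steps):
        if state in finals:
            return True
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:
            return False                          # no move defined: the machine halts
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1 if move == 'L' else 0
    return False

delta = {('q0', '0'): ('q0', '0', 'R'), ('q0', '1'): ('q1', '1', 'R'),
         ('q1', '0'): ('q1', '0', 'R'), ('q1', '1'): ('q1', '1', 'R')}

print(tm_accepts(delta, '00011', 'q0', {'q1'}))  # True: 00011 is accepted
print(tm_accepts(delta, '000', 'q0', {'q1'}))    # False: q1 is never reached
```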
Solution
Let the required Turing machine be M = (Q, Σ, Γ, δ, q0, #, F), where
Q = set of states = {q0, h}, where h = halting state
Σ = set of input symbols = {a, b}
Γ = set of tape symbols (Σ ∪ {#}) = {a, b, #}
q0 = initial state or start state (q0 ∈ Q)
Now, let any string (w) = abbbaa be given, which can be processed as:
q0abbbaa ⊢ #q0bbbaa ⊢ ##q0bbaa ⊢ ###q0baa ⊢ ####q0aa ⊢ #####q0a ⊢
######q0# ⊢ ######h#
Since the TM reaches the defined halting state after processing the whole input string, we say that
the string is accepted by the TM. Hence, the defined TM is capable of handling our problem.
Present state | a | b | #
q0 | (q0, #, R) | (q0, #, R) | (h, #, N)
*h | Halting (accepting) state | |
Note: This problem can also be treated by defining a final state rather than a
halting state, as follows:
Example-2: Design a TM that recognizes the language of all strings of even length
over the alphabet {a, b}.
Solution
Let the required Turing machine be M = (Q, Σ, Γ, δ, q0, #, F), where
Q = set of states = {q0, q1, h}, where h = halting state
Σ = set of input symbols = {a, b}
Γ = set of tape symbols (Σ ∪ {#}) = {a, b, #}
q0 = initial state or start state (q0 ∈ Q)
# = blank symbol (# ∈ Γ)
δ = transition function, defined as:
δ(q0, a) → (q1, a, R)
δ(q0, b) → (q1, b, R)
δ(q1, a) → (q0, a, R)
δ(q1, b) → (q0, b, R)
δ(q0, #) → (h, #, N)
(h, #) → accepting state
Present state | a | b | #
q0 | (q1, a, R) | (q1, b, R) | (h, #, N)
q1 | (q0, a, R) | (q0, b, R) |
h | Accepting state | |
Now, let any string (w) = abbbaa be given, which can be processed as:
q0abbbaa ⊢ aq1bbbaa ⊢ abq0bbaa ⊢ abbq1baa ⊢ abbbq0aa ⊢ abbbaq1a ⊢
abbbaaq0# ⊢ abbbaah#
Let Σ0, Σ1 be any alphabets not containing the blank symbol #. Also, let f be a function
from Σ0* to Σ1*. A Turing machine M is said to compute f if Σ0, Σ1 ⊆ Γ and, for any
string w ∈ Σ0*, if f(w) = u then (q0, #w#) ⊢* (h, #u#).
If some such Turing machine M exists, then the function f is said to be a Turing
computable function.
Note:
a. Here, the TM must halt when it finds the resulting functional value during
processing from the start state.
b. Σ0 is the input alphabet for the defined TM.
c. ⊆ indicates a subset, possibly equal to the whole set.
d. u is the resulting value of the function f.
e. The function f from Σ0* to Σ1* shows that both sets of symbols fall in
the same kind of domain, e.g. real numbers to real numbers.
f. The # shows the current position of the reading head.
Example-1: Design a Turing machine which computes the function f(n) = n + 1, for
n ∈ N (the natural numbers).
Solution
Here, we design a Turing machine M which computes the function f by writing an
I in the tape square on which its head is initially located, moving its head one square
right and halting.
Formally, let the required Turing machine be M = (Q, Σ, Γ, δ, q0, #, F), where
Q = set of states = {q0, h}, where h = halting state
Σ = set of input symbols = {I}
Γ = set of tape symbols (Σ ∪ {#}) = {I, #}
q0 = initial state or start state (q0 ∈ Q)
# = blank symbol (# ∈ Γ)
δ = transition function, defined as:
δ(q0, #) → (h, I, R)
or, alternatively,
δ(q0, #) → (q0, I, N) and δ(q0, I) → (h, #, R)
(h, #) → accepting state
Present state | I | #
q0 | (h, #, R) | (q0, I, N)
h | Accepting state |
Hence, in a similar manner we can conclude that, for an input representing n (i.e. I^n):
using the first definition of δ: (q0, #I^n##) ⊢ (h, #I^(n+1)#), i.e. accepted;
using the alternative definition: (q0, I^n##) ⊢ (q0, I^nI#) ⊢ (h, I^(n+1)#), i.e. accepted.
Example-2: Design a Turing machine which computes the function f(n) = n + 2, for
n ∈ N (the natural numbers).
Solution
Here, we design a Turing machine M which computes the function f by writing an
I in the tape square on which its head is initially located, moving its head one square
right, writing another I, and halting.
Formally, let the required Turing machine be M = (Q, Σ, Γ, δ, q0, #, F), where
Q = set of states = {q0, q1, h}, where h = halting state
Σ = set of input symbols = {I}
Γ = set of tape symbols (Σ ∪ {#}) = {I, #}
q0 = initial state or start state (q0 ∈ Q)
# = blank symbol (# ∈ Γ)
δ = transition function, defined as:
δ(q0, #) → (q1, I, R)
δ(q1, #) → (h, I, R)
(h, #) → accepting state
Present state | I | #
q0 | | (q1, I, R)
q1 | | (h, I, R)
h | | Accepting state
Here,
For n = 0: (q0, ###) ⊢ (q1, I##) ⊢ (h, II#), i.e. accepted.
For n = 1: (q0, I###) ⊢ (q1, II##) ⊢ (h, III#), i.e. accepted.
For n = 2: (q0, II###) ⊢ (q1, III##) ⊢ (h, IIII#), i.e. accepted.
The transition function δ(q, a) for the DTM is defined for some elements of Q × Γ (the set of states
× the set of tape symbols).
A. Decision problem
In computability theory and computational complexity theory, a decision problem is a
question in some formal system with a yes-or-no answer, depending on the values of some
input parameters. For example, the problem "given two numbers x and y, does x evenly
divide y?" is a decision problem. The answer can be either 'yes' or 'no', and depends upon the
values of x and y.
B. Function problem
In computational complexity theory, a function problem is a computational problem where a
single output (of a total function) is expected for every input, but the output is more complex
than that of a decision problem, that is, it isn't just YES or NO. Notable examples include the
travelling salesman problem, which asks for the route taken by the salesman, and the integer
factorization problem, which asks for the list of factors.
Function problems can be sorted into complexity classes in the same way as decision
problems. For example FP is the set of function problems which can be solved by a
deterministic Turing machine in polynomial time, and FNP is the set of function problems
which can be solved by a non-deterministic Turing machine in polynomial time.
C. Search problem
In computational complexity theory and computability theory, a search problem is a type of
computational problem represented by a binary relation. A search problem may ask for any one (or
several) of possibly many valid solutions drawn from a large set of candidates, whereas a function
problem expects a single well-defined output.
For instance, such problems occur very frequently in graph theory, where searching graphs
for structures such as particular matchings, cliques, independent sets, etc. is a subject of
interest.
A TM of this type corresponds to the notion of an algorithm: a well-defined sequence of steps
that always finishes and produces an answer. If we think of the language L as a problem, then
the problem L is called decidable if it is a recursive language and undecidable if it is not a
recursive language.
A. Partial function
A partial function f from X to Y is a rule which assigns to every element of X at most one
element of Y. For example, if R denotes the set of all real numbers, then f(r) = +√r is a partial
function, since f(r) is not defined as a real number when r is negative.
B. Total function
A total function f from X to Y is a rule which assigns to every element of X a unique
element of Y. For example, if R denotes the set of all real numbers, then f(r) = 2r is a total
function, since f(r) is defined for both positive and negative r.
5.1 Introduction
Complexity theory considers not only whether a problem can be solved at all on a computer (as
computability theory), but also how efficiently the problem can be solved. Thus, the complexity
theory classifies problems according to their degree of “difficulty”. Two major aspects are
considered: time complexity and space complexity, which are, respectively, how many steps
it takes to perform a computation and how much memory is required to perform that
computation. Although the time and space complexity depend on the particular input, what matters
is how they grow with the size of the input, so computer scientists have adopted Big-O notation to
compare the asymptotic behaviour of algorithms on large problems.
Complexity classes
Complexity class | Model of computation             | Resource constraint
P                | Deterministic Turing machine     | Time poly(n)
EXPTIME          | Deterministic Turing machine     | Time 2^poly(n)
NP               | Non-deterministic Turing machine | Time poly(n)
NEXPTIME         | Non-deterministic Turing machine | Time 2^poly(n)
L                | Deterministic Turing machine     | Space O(log n)
PSPACE           | Deterministic Turing machine     | Space poly(n)
EXPSPACE         | Deterministic Turing machine     | Space 2^poly(n)
NL               | Non-deterministic Turing machine | Space O(log n)
NPSPACE          | Non-deterministic Turing machine | Space poly(n)
NEXPSPACE        | Non-deterministic Turing machine | Space 2^poly(n)
P versus NP problem
The complexity class P is often seen as a mathematical
abstraction modeling those computational tasks that admit an
efficient algorithm.
The complexity class NP, on the other hand, contains many
problems that people would like to solve efficiently, but for
which no efficient (polynomial-time) algorithm is known.
Informally the class P is the class of decision problems solvable by some algorithm within a
number of steps bounded by some fixed polynomial in the length of the input. Turing was not
concerned with the efficiency of his machines, but rather his concern was whether they can
simulate arbitrary algorithms given sufficient time. However it turns out Turing machines can
generally simulate more efficient computer models (for example machines equipped with many
tapes or an unbounded random access memory) by at most squaring or cubing the computation
time. Thus P is a robust class, and has equivalent definitions over a large class of computer
models.
NP-completeness
NP-complete is a subset of NP, the set of all decision problems whose solutions can be verified in
polynomial time; NP may be equivalently defined as the set of decision problems that can be
solved in polynomial time on a nondeterministic Turing machine.
NP-hard problems
A problem that satisfies condition (2) of the definition of
NP-completeness but not necessarily condition
(1) is called an NP-hard problem. Such problems are
intractable.