Last Unit Notes

Recursive Language

A language L is recursive (decidable) if L is the set of strings accepted by some Turing Machine
(TM) that halts on every input.
Example
When a Turing machine reaches a final state, it halts. We can also say that a Turing machine M
halts when M reaches a state q scanning a symbol 'a' such that δ(q, a) is undefined.
Some TMs never halt on certain inputs, so we distinguish between the languages accepted by a
TM that halts on all input strings and the languages accepted by a TM that may fail to halt on
some input strings.
Recursively Enumerable Language
A language L is recursively enumerable if L is the set of strings accepted by some TM.
If L is a recursively enumerable language then −
If w ∈ L then the TM halts in a final state,
If w ∉ L then the TM halts in a non-final state or loops forever.
If L is a recursive language then −
If w ∈ L then the TM halts in a final state,
If w ∉ L then the TM halts in a non-final state.
Recursive languages are also recursively enumerable
Proof − If L is recursive, then there is a TM M that decides membership in L:
 M accepts x if x is in language L.
 M rejects x if x is not in language L.
In particular, M halts and accepts exactly the strings in L, so M also recognizes L; hence L is
recursively enumerable.
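The distinction can be sketched in code, using an ordinary Python function as a stand-in for a machine. The example language (strings with equally many a's and b's) is an assumption chosen purely for illustration.

```python
# A decider always halts with True/False; a recognizer may loop on
# strings outside the language. Any decider is trivially a recognizer.

def decider(w: str) -> bool:
    """Total function: halts on every input. L = {w : #a(w) == #b(w)}."""
    return w.count("a") == w.count("b")

def recognizer(w: str) -> bool:
    """The decider reused unchanged as a recognizer: it accepts members,
    and (here) also halts on non-members, which is allowed but not
    required of a recognizer."""
    return decider(w)

print(recognizer("aabb"))  # True: "aabb" is in L
print(recognizer("aab"))   # False: a decider halts on non-members too
```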

Properties of Recursively enumerable languages

1. Union
2. Intersection
3. Complement

Union of RE languages
Let's first revise the union of sets:
Set 1 = {a, b, c}
Set 2 = {b, c, d}
Set 1 Union Set 2 = {a, b, c, d}
Now the same concept with Turing machines. Suppose TM1 and TM2 recognize the languages L1
and L2. To recognize L1 ∪ L2, simulate both machines in parallel, one step of each in turn, so
that neither simulation can starve the other:
o If TM1 accepts, the combined machine halts and accepts.
o If TM1 runs forever but TM2 accepts, the combined machine still halts and accepts.
o If neither TM1 nor TM2 accepts, the combined machine runs forever, which is allowed
for a recognizer.
Hence the union of two RE languages is RE.
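The parallel simulation can be sketched as follows, modeling each machine as a Python generator that yields False on every step and True when it accepts. The two toy machines are assumptions chosen for illustration, not real Turing machines.

```python
def tm1(w):
    """Toy machine: accepts strings of even length, loops otherwise."""
    if len(w) % 2 == 0:
        yield True
    while True:
        yield False

def tm2(w):
    """Toy machine: accepts strings containing "ab", loops otherwise."""
    if "ab" in w:
        yield True
    while True:
        yield False

def union_recognizer(w, machines, max_steps=None):
    """Dovetail: advance each machine one step per round; accept as soon
    as any machine accepts. Without max_steps this loops forever when no
    machine accepts -- exactly the behaviour allowed for an RE language.
    max_steps is only a demo guard."""
    sims = [m(w) for m in machines]
    steps = 0
    while max_steps is None or steps < max_steps:
        for sim in sims:
            if next(sim):
                return True
        steps += 1
    return None  # demo budget exhausted: still "running"

print(union_recognizer("ab", [tm1, tm2]))                  # True
print(union_recognizer("aaa", [tm1, tm2], max_steps=50))   # None
```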

The intersection of RE languages

Let's revise the intersection of sets:
Set 1 = {a, b, c}
Set 2 = {b, c, d}
Set 1 Intersection Set 2 = {b, c}

Now the same concept with Turing machines. Suppose TM1 and TM2 recognize the languages L1
and L2. To recognize L1 ∩ L2, first run TM1 on the input; if it accepts, then run TM2:
o If TM1 runs forever, the combined machine runs forever. This is acceptable, since such
an input is not in L1 and therefore not in the intersection.
o If TM1 accepts and TM2 also accepts, the combined machine accepts.
o If TM1 accepts but TM2 runs forever, the combined machine runs forever, again on an
input outside the intersection.
Hence the intersection of two RE languages is RE.
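The sequential composition can be sketched with the same generator model used for toy machines (an assumption for illustration): run each recognizer in turn and accept only if every one accepts.

```python
def tm_even(w):
    """Toy machine: accepts strings of even length, loops otherwise."""
    if len(w) % 2 == 0:
        yield True
    while True:
        yield False

def tm_has_ab(w):
    """Toy machine: accepts strings containing "ab", loops otherwise."""
    if "ab" in w:
        yield True
    while True:
        yield False

def intersection_recognizer(w, machines, max_steps=1000):
    """Run the machines one after another; accept only if all accept.
    max_steps is a demo guard -- a real recognizer would simply keep
    running, possibly forever, on inputs outside the intersection."""
    for m in machines:
        sim = m(w)
        for _ in range(max_steps):
            if next(sim):
                break          # this machine accepted; try the next
        else:
            return None        # demo budget exhausted: still "running"
    return True

print(intersection_recognizer("abab", [tm_even, tm_has_ab]))  # True
print(intersection_recognizer("aaaa", [tm_even, tm_has_ab]))  # None
```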

The complement of RE languages

Unlike union and intersection, the class of RE languages is not closed under complement: there
are RE languages (for example, the language of the halting problem) whose complements are not
RE.
There is, however, a useful fact in the other direction. Suppose TM1 recognizes L and TM2
recognizes the complement of L:
o Every input w belongs to exactly one of L and its complement, so exactly one of TM1
and TM2 accepts w.
o Running TM1 and TM2 in parallel on w therefore always terminates: accept if TM1
accepts, reject if TM2 accepts.
o Hence, if both L and its complement are RE, then L is recursive.
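A standard construction worth sketching: if both a language L and its complement have recognizers, running them in parallel yields a decider for L, because every input is accepted by exactly one of the two. The generator model and the toy language pair (even-length vs. odd-length strings) are assumptions for illustration.

```python
def rec_even(w):
    """Toy recognizer for L = strings of even length."""
    if len(w) % 2 == 0:
        yield True
    while True:
        yield False

def rec_odd(w):
    """Toy recognizer for the complement of L (odd length)."""
    if len(w) % 2 == 1:
        yield True
    while True:
        yield False

def decide_via_two_recognizers(w, rec_L, rec_coL):
    """Exactly one of the two recognizers accepts w, so this loop
    always terminates: membership in L is decidable."""
    a, b = rec_L(w), rec_coL(w)
    while True:
        if next(a):
            return True    # w is in L
        if next(b):
            return False   # w is in the complement of L

print(decide_via_two_recognizers("ab", rec_even, rec_odd))   # True
print(decide_via_two_recognizers("abc", rec_even, rec_odd))  # False
```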
Context Sensitive Grammar
A Context-Sensitive Grammar is a formal grammar in which the left-hand side and right-hand
side of any production rule may be surrounded by a context of terminal and non-terminal
symbols. It is less general than unrestricted grammar and more general than context-free
grammar.
Context-sensitive grammars were introduced by Noam Chomsky in the 1950s.

Formal definition of Context-Sensitive Grammar


The formal definition of Context-Sensitive Grammar is as follows. It is a 4-tuple

G = (N, Σ, P, S)

where
N − a set of non-terminal symbols,
Σ − a set of terminal symbols,
P − a set of production rules,
S − the start symbol of the grammar.

The grammar is context-sensitive if every production rule is of the form −

αAβ → αγβ

where,

A ∈ N,
α,β ∈ (N∪Σ)*,
γ ∈ (N∪Σ)+
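Because γ is nonempty, every rule of the form αAβ → αγβ is noncontracting: the right-hand side is never shorter than the left-hand side. A small checker of this property, using a tuple representation for productions chosen for this sketch, together with the classic context-sensitive grammar for {aⁿbⁿcⁿ : n ≥ 1}:

```python
# Productions are (lhs, rhs) tuples of strings, one character per symbol
# -- a representation chosen for this illustration.

def is_noncontracting(production, nonterminals):
    """The left side contains a nonterminal and is never longer than the
    right side. Noncontracting grammars generate exactly the
    context-sensitive languages (ignoring the empty string)."""
    lhs, rhs = production
    return (any(s in nonterminals for s in lhs)
            and len(lhs) <= len(rhs))

# Classic grammar for {a^n b^n c^n : n >= 1}:
N = {"S", "B", "C"}
P = [("S", "aSBC"), ("S", "aBC"), ("CB", "BC"),
     ("aB", "ab"), ("bB", "bb"), ("bC", "bc"), ("cC", "cc")]

print(all(is_noncontracting(p, N) for p in P))      # True
print(is_noncontracting(("AB", "a"), {"A", "B"}))   # False: rule shrinks
```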

Context Sensitive Language


The language generated by a context-sensitive grammar is called a Context-Sensitive
Language. Context-sensitive languages have the following properties −
 The union, intersection and concatenation of two context-sensitive languages are
context-sensitive.
 The Kleene plus of a context-sensitive language is context-sensitive.
 Every context-sensitive language is recursive.
 The complement of a context-sensitive language is context-sensitive.
The Chomsky hierarchy

1. Type 0 is known as unrestricted grammar.
2. Type 1 is known as context-sensitive grammar.
3. Type 2 is known as context-free grammar.
4. Type 3 is known as regular grammar.
Tractable and Intractable Problems

Tractable Problem: a problem that is solvable by a polynomial-time algorithm. The upper bound
on its running time is polynomial.
Here are examples of tractable problems (ones with known polynomial-time algorithms):
– Searching an unordered list
– Searching an ordered list
– Sorting a list
– Multiplication of integers (even though there is a gap between the best known upper and
lower bounds)
– Finding a minimum spanning tree in a graph (even though there is a gap)

Intractable Problem: a problem that cannot be solved by a polynomial-time algorithm. The lower
bound on its running time is exponential.
From a computational-complexity standpoint, intractable problems are problems for which no
efficient algorithms exist to solve them.
Most intractable problems do have an algorithm that provides a solution, namely brute-force
search. This algorithm, however, is not efficient and is therefore not feasible for computation
with anything more than the smallest inputs.

Examples
– Towers of Hanoi: we can prove that any algorithm that solves this problem must have a
worst-case running time of at least 2^n − 1 moves.
– Listing all permutations (all possible orderings) of n numbers, which requires at least n! steps.
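The 2^n − 1 bound can be checked against the standard recursive Towers of Hanoi solution, which is optimal:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Standard recursive Towers of Hanoi; returns the list of moves
    needed to shift n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # move n-1 disks back on top
    return moves

for n in range(1, 6):
    assert len(hanoi(n)) == 2**n - 1     # matches the lower bound

print(len(hanoi(5)))  # 31 moves: 2^5 - 1
```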

P Class
The P in the P class stands for Polynomial Time. It is the collection of decision
problems (problems with a "yes" or "no" answer) that can be solved by a deterministic machine
in polynomial time.
Features:
1. The solution to P problems is easy to find.
2. P is the class of computational problems that are solvable and tractable. Tractable
means that the problems can be solved in theory as well as in practice. Problems that
can be solved in theory but not in practice are known as intractable.
NP Class
The NP in NP class stands for Non-deterministic Polynomial Time. It is the collection of
decision problems that can be solved by a non-deterministic machine in polynomial time.
Features:
1. Solutions to NP problems may be hard to find, since they are produced by a non-
deterministic machine, but they are easy to verify.
2. Problems in NP can be verified by a deterministic Turing machine in polynomial time,
given a suitable certificate.
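The "easy to verify" property can be illustrated with subset sum, a standard NP problem: deciding whether some subset of a list sums to a target is hard in general, but checking a proposed subset (the certificate) is a polynomial-time scan. The function name and instance below are assumptions for illustration.

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: given candidate indices, check that
    they are distinct, in range, and sum to the target. Finding such a
    subset is the hard part; checking one is easy."""
    return (len(set(certificate)) == len(certificate)
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))   # True: 4 + 5 == 9
print(verify_subset_sum(nums, 9, [0, 1]))   # False: 3 + 34 != 9
```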

NP-hard class
An NP-hard problem is at least as hard as the hardest problem in NP: every problem in NP
reduces to it in polynomial time.
Features:
1. Not all NP-hard problems are in NP.
2. An NP-hard problem need not even have efficiently checkable solutions: given a proposed
solution, verifying whether it is correct may itself take more than polynomial time.
3. A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-time
reduction from L to A.

NP-complete class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are the hardest
problems in NP.
Features:
1. NP-complete problems are special as any problem in NP class can be transformed or
reduced into NP-complete problems in polynomial time.
2. If one could solve an NP-complete problem in polynomial time, then one could also solve
any NP problem in polynomial time.

Undecidable Problems –
The problems for which we cannot construct an algorithm that answers the problem correctly in
finite time are termed Undecidable Problems. These problems may be partially decidable (semi-
decidable), but they are never decidable: there is always some input that leads the Turing
machine into an infinite loop without it ever providing an answer.

Fermat's Last Theorem, which states that no three positive integers a, b and c can satisfy the
equation a^n + b^n = c^n for any n > 2, was long a popular example here.
If we ask a Turing machine to search for a counterexample, it might run forever trying values of
n, a, b and c. As long as no proof is known, we can never be sure whether a counterexample
exists, so the search only semi-decides the question: it would confirm a counterexample if one
existed, but it can never report that none exists. (The theorem has since been proved by Andrew
Wiles, so the search will in fact never succeed.)

Vertex Cover
A Vertex Cover of a graph G is a set of vertices such that each edge in G is incident to at least one
of these vertices.

The decision version of the vertex-cover problem was proven NP-complete. Now we want to solve
the optimization version, i.e., we want to find a minimum-size vertex cover of a given graph. We
call such a vertex cover an optimal vertex cover C*.

The idea is to take an edge (u, v) at a time, put both of its vertices into C, and remove all the
edges incident to u or v. We carry on until all edges have been removed. C is then a vertex
cover. But how good is C?

Vertex Cover = {b, c, d, e, f, g}
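The procedure just described is the classic 2-approximation, and it can be sketched as follows. The edge list is a hypothetical graph, since the figure the example refers to is not reproduced here.

```python
def approx_vertex_cover(edges):
    """Repeatedly pick an uncovered edge (u, v), add both endpoints to
    the cover, and discard every edge incident to u or v. The result is
    a vertex cover at most twice the size of an optimal one."""
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]
        cover.update((u, v))
        remaining = [e for e in remaining if u not in e and v not in e]
    return cover

# Hypothetical graph for demonstration:
edges = [("a", "b"), ("b", "c"), ("c", "d"),
         ("d", "e"), ("e", "f"), ("d", "g")]
print(approx_vertex_cover(edges))
```

Each chosen edge forces at least one of its endpoints into any optimal cover, and we add two vertices per chosen edge, which is where the factor of 2 comes from.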


Traveling-salesman Problem
In the traveling salesman Problem, a salesman must visits n cities. We can say that salesman wishes
to make a tour or Hamiltonian cycle, visiting each city exactly once and finishing at the city he
starts from. There is a non-negative cost c (i, j) to travel from the city i to city j. The goal is to find
a tour of minimum cost. We assume that every two cities are connected. Such problems are called
Traveling-salesman problem (TSP).

We can model the cities as a complete graph of n vertices, where each vertex represents a city.

It can be shown that TSP is NP-complete.

If we assume the cost function c satisfies the triangle inequality, then we can use an
approximation algorithm based on a minimum spanning tree.
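The standard MST-based 2-approximation can be sketched as follows; whether this is exactly the algorithm the notes intended is an assumption. Build a minimum spanning tree with Prim's algorithm, then visit the cities in a preorder walk of the tree; with the triangle inequality, the tour costs at most twice the optimum.

```python
def mst_tsp_tour(dist):
    """dist is a symmetric distance matrix obeying the triangle
    inequality. Returns a tour (list of city indices) starting and
    ending at city 0, of cost at most twice the optimal tour."""
    n = len(dist)
    in_tree, parent = {0}, {0: None}
    while len(in_tree) < n:                      # Prim's algorithm
        u, v = min(((u, v) for u in in_tree
                    for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        in_tree.add(v)
        parent[v] = u
    children = {u: [] for u in range(n)}
    for v, u in parent.items():
        if u is not None:
            children[u].append(v)
    tour, stack = [], [0]                        # preorder walk
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [0]                            # return to the start

# Four cities on a line; distances satisfy the triangle inequality:
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
print(mst_tsp_tour(dist))  # [0, 1, 2, 3, 0]
```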

Hamiltonian Path Problems

Given a graph G = (V, E) we have to find the Hamiltonian Circuit using Backtracking approach.
We start our search from any arbitrary vertex say 'a.' This vertex 'a' becomes the root of our implicit
tree. The first element of our partial solution is the first intermediate vertex of the Hamiltonian
Cycle that is to be constructed. The next adjacent vertex is selected by alphabetical order. If at any
stage any arbitrary vertex makes a cycle with any vertex other than vertex 'a' then we say that dead
end is reached. In this case, we backtrack one step, and again the search begins by selecting another
vertex and backtrack the element from the partial; solution must be removed. The search using
backtracking is successful if a Hamiltonian Cycle is obtained.

Example: Consider a graph G = (V, E) shown in the figure; we have to find a Hamiltonian circuit
using the backtracking method.
Solution: First, we start the search with vertex 'a'. This vertex 'a' becomes the root of our
implicit tree.

Next, we choose vertex 'b' adjacent to 'a', as it comes first in lexicographical order (b, c, d).

Next, we select 'c' adjacent to 'b'.

Next, we select 'd' adjacent to 'c'.

Next, we select 'e' adjacent to 'd'.
Next, we select vertex 'f' adjacent to 'e'. The vertices adjacent to 'f' are 'd' and 'e', but they
have already been visited. Thus we reach a dead end, backtrack one step, and remove the
vertex 'f' from the partial solution.
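The backtracking procedure can be sketched as follows. The adjacency sets form a hypothetical graph, since the figure from the notes is not reproduced here.

```python
def hamiltonian_cycle(graph, start="a"):
    """Backtracking search as described above: extend the partial path
    with the first unvisited neighbour in alphabetical order; on a dead
    end, remove the last vertex and try the next choice."""
    def extend(path):
        u = path[-1]
        if len(path) == len(graph):
            # All vertices used: close the cycle if possible.
            return path + [start] if start in graph[u] else None
        for v in sorted(graph[u]):
            if v not in path:
                result = extend(path + [v])
                if result:
                    return result
        return None   # dead end: backtrack

    return extend([start])

# Hypothetical graph for demonstration:
graph = {
    "a": {"b", "c", "f"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "e"},
    "d": {"b", "e", "f"},
    "e": {"c", "d", "f"},
    "f": {"a", "d", "e"},
}
print(hamiltonian_cycle(graph))
```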
