Unit-5 Undecidability-ToC

Unit-5 Undecidability:

Unsolvable Problems:

Unsolvable problems, also known as undecidable problems, are computational problems for which there is no algorithmic solution: no algorithm can solve the problem correctly for all possible inputs in a finite amount of time.

One example of an unsolvable problem is the Halting Problem, which asks whether a given program will eventually halt when run on a particular input. It has been proven that there is no algorithm that can solve the Halting Problem for all possible programs and inputs.

The roots of undecidability lie in the work of the mathematician Kurt Gödel in the early 1930s on the foundations of mathematics. Gödel showed that in any consistent formal system powerful enough to express arithmetic there are statements that can neither be proved nor disproved from the system's axioms. In the mid-1930s, Alonzo Church and Alan Turing carried these ideas over to computation, showing that certain problems, such as the Halting Problem, cannot be solved by any algorithm.

The concept of undecidability has led to a deeper understanding of the limits of what can and cannot be computed. It has also motivated alternative approaches to hard computational problems, such as heuristics and approximation algorithms.

The study of unsolvable problems and undecidability is a fundamental area of research in theoretical computer science and mathematics, with applications in fields such as cryptography, artificial intelligence, and software engineering.
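
The classic argument that the Halting Problem is unsolvable is a diagonalization. The sketch below (in Python, as a thought experiment rather than working code) assumes a hypothetical decider halts(program, input) and shows why it cannot exist: a program built to do the opposite of the decider's prediction on its own source code leads to a contradiction.

    def halts(program_source: str, program_input: str) -> bool:
        """Hypothetical decider assumed to exist: True iff the program described
        by program_source halts when run on program_input. No such function can
        actually be implemented -- that is the point of the argument."""
        ...

    def paradox(program_source: str) -> None:
        # Do the opposite of what the decider predicts for a program run on
        # its own source code.
        if halts(program_source, program_source):
            while True:      # loop forever if the decider says "it halts"
                pass
        # otherwise: halt immediately

    # Feeding paradox its own source code is contradictory: if it halts, the
    # decider said it loops; if it loops, the decider said it halts.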

Computable Functions:
In the context of undecidability, computable functions refer to functions that can be
computed by a Turing machine. In other words, these are functions that can be computed by
an algorithm.

The concept of computable functions is important in the study of undecidability because it allows us to formalize the notion of a decision problem. A decision problem is a problem that can be formulated as a yes/no question and for which we seek an algorithmic solution; in other words, it asks whether a certain property holds for a given input.

In the theory of computability, a function is said to be computable if there exists a Turing machine that can compute it: an algorithm that, given any input, produces the value of the function on that input.

The concept of computable functions is closely related to the concept of recursive functions. A function is recursive if it can be defined by a finite number of recursive equations: it is defined in terms of itself on smaller arguments, together with a base case that can be evaluated without any reference to the function itself.

There are many examples of computable functions, such as addition, subtraction, multiplication, and integer division. However, there are also functions that are not computable, such as the halting function, which takes (a description of) a Turing machine and an input and returns 1 if the machine halts on that input and 0 otherwise. The corresponding decision problem, the halting problem, is undecidable: no algorithm can solve it for all possible inputs.

In summary, the concept of computable functions is central to the study of undecidability because it formalizes the notion of an algorithmically solvable problem and provides a way to distinguish between problems that can be solved algorithmically and problems that cannot.
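
As a small illustration of a computable (in fact primitive recursive) function, the sketch below defines addition on the natural numbers by a base case and one recursive equation, in the spirit of the recursive definitions described above.

    def add(m: int, n: int) -> int:
        """Addition by primitive recursion on n:
        add(m, 0)     = m                 (base case)
        add(m, n + 1) = add(m, n) + 1     (recursive equation via successor)"""
        if n == 0:
            return m
        return add(m, n - 1) + 1

    assert add(3, 4) == 7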

Recursive and Recursively Enumerable Languages:


Recursive languages and recursively enumerable languages are two important
classes of languages in formal language theory.

A language is recursive if there exists an algorithm that can decide whether any
given string belongs to the language or not. In other words, a language is recursive if
it is possible to construct a Turing machine that can halt and output "yes" or "no" for
any input string. Recursive languages are also known as decidable languages.

On the other hand, a language is recursively enumerable if there exists a Turing
machine that can generate (enumerate) all the strings in the language in some order.
Equivalently, a language is recursively enumerable if there is a Turing machine that
halts and accepts every string in the language, but may run forever on a string that
is not in the language.

All recursive languages are recursively enumerable, but not all recursively
enumerable languages are recursive. A recursively enumerable language only requires
a machine that accepts the strings in the language; on strings outside the language
the machine may run forever, so there need not be any algorithm that decides, for
every string, whether it belongs to the language or not. The language of the halting
problem is a standard example: it is recursively enumerable but not recursive.
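
The contrast can be sketched in code: the first function below is a decider for a trivially decidable language and always halts with a yes/no answer, while the second only recognizes its language, accepting strings that belong to it but possibly running forever on strings that do not. The helper simulate_steps is hypothetical; it stands in for running an encoded machine for a bounded number of steps.

    import itertools

    def decide_even_length(w: str) -> bool:
        # A decider: always halts, so {w : |w| is even} is recursive (decidable).
        return len(w) % 2 == 0

    def recognize_halting(machine_description: str, x: str) -> bool:
        # A recognizer only: accepts if the encoded machine halts on x, but
        # loops forever when it does not -- the halting language is recursively
        # enumerable but not recursive.
        for n in itertools.count(1):                        # unbounded search
            if simulate_steps(machine_description, x, n):   # hypothetical helper
                return True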

The study of recursive and recursively enumerable languages is a fundamental area
of research in theoretical computer science and has applications in areas such as
programming language design, compiler construction, and natural language
processing.

Properties of Recursive and Recursively Enumerable Languages:


Recursive and recursively enumerable languages have several important properties that distinguish them from each other.

One of the main properties of a recursive language is that it is decidable: there exists an algorithm (a Turing machine that always halts) that decides whether any given string belongs to the language or not. In other words, membership in a recursive language can be settled in a finite amount of time for every input.

In contrast, a recursively enumerable language is not necessarily decidable. Although there exists a Turing machine that accepts exactly the strings of the language, it may not be possible to decide membership for every string, because the machine may never halt on a string that is not in the language.

Recursive languages are closed under complementation, intersection, and union: if L1 and L2 are recursive languages, then their complements, intersection, and union are also recursive languages.

Recursively enumerable languages are closed under union and intersection, but they are not closed under complementation: the complement of a recursively enumerable language need not be recursively enumerable. In fact, if both a language and its complement are recursively enumerable, then the language is recursive.

A further property of recursively enumerable languages is that they are closed under concatenation, Kleene star, and homomorphism: if L1 and L2 are recursively enumerable languages, then their concatenation, Kleene stars, and homomorphic images are also recursively enumerable languages.
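
Closure of recursively enumerable languages under union, mentioned above, can be seen by dovetailing two recognizers: run both for one step, then two steps, and so on, and accept as soon as either accepts. The sketch below assumes a hypothetical helper accepts_within(machine, w, n) that reports whether an encoded machine accepts w within n steps.

    import itertools

    def recognize_union(machine1, machine2, w):
        # Dovetail the two recognizers so that neither can block the other.
        # If either machine accepts w, this loop eventually returns True; if
        # neither does, it runs forever, which a recognizer is allowed to do.
        for n in itertools.count(1):
            if accepts_within(machine1, w, n) or accepts_within(machine2, w, n):
                return True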

In summary, recursive languages are decidable and are closed under all the Boolean operations, while recursively enumerable languages are not necessarily decidable and, in particular, are not closed under complementation. These properties are important in the study of formal languages and their applications in computer science.

Post’s Correspondence Problem (PCP):


Post's Correspondence Problem (PCP) is a classic decision problem in theoretical
computer science, named after Emil Post. The problem is defined as follows:

Given a finite set of pairs of strings (s1, t1), (s2, t2), ..., (sn, tn), is there a
non-empty sequence of indices i1, i2, ..., ik such that the concatenations of the
first and second components agree, that is,
s[i1]s[i2]...s[ik] = t[i1]t[i2]...t[ik]?

The PCP is known to be undecidable, meaning that there is no algorithm that can
solve the problem for all possible inputs. This was proven by showing that the
Halting Problem can be reduced to the PCP, meaning that if we had an algorithm to
solve the PCP, we could use it to solve the Halting Problem, which is known to be
undecidable.

Despite being undecidable, the PCP is an important problem in the study of formal
languages and automata theory, and has applications in cryptography, coding theory,
and DNA computing. The PCP is also a useful tool for proving the undecidability of
other problems, and has been used to show that several other decision problems are
undecidable.

Modified Post’s Correspondence Problem (PCP):


The Modified Post's Correspondence Problem (MPCP) is a variation of the classic Post's Correspondence Problem (PCP) in which the solution is required to begin with a designated pair, conventionally the first pair in the list.

Formally, the input to the MPCP is a finite set of string pairs (s1, t1), (s2, t2), ..., (sn, tn). The problem is to decide whether there exists a sequence of indices i1, i2, ..., ik with i1 = 1 such that s[i1]s[i2]...s[ik] = t[i1]t[i2]...t[ik].

The MPCP is also undecidable: there is no algorithm that solves it for all possible inputs. The standard proof reduces the halting (or acceptance) problem for Turing machines to the MPCP, and then reduces the MPCP to the ordinary PCP; this two-step reduction is how the undecidability of the PCP itself is usually established.

Like the PCP, the MPCP is mainly used as a tool for proving the undecidability of other decision problems, for example problems about context-free grammars.
Example Problems:
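
As a worked example (a standard textbook instance, not taken from the notes above), consider the pairs (1, 111), (10111, 10), (10, 0). The index sequence 2, 1, 1, 3 is a solution, since both the top and the bottom strings concatenate to 101111110. The small Python check below verifies this.

    # Pairs are written as (top, bottom); indices in the solution are 1-based.
    pairs = [("1", "111"), ("10111", "10"), ("10", "0")]
    solution = [2, 1, 1, 3]

    top = "".join(pairs[i - 1][0] for i in solution)
    bottom = "".join(pairs[i - 1][1] for i in solution)
    print(top, bottom, top == bottom)   # 101111110 101111110 True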

Universal Turing Machine (UTM):


A Universal Turing Machine (UTM) is a theoretical construct that can simulate any other
Turing machine, and can therefore be programmed to carry out any computation that a
Turing machine can perform. In other words, a UTM is a Turing machine that can take as
input the description of any other Turing machine, and simulate its behavior on any input.

The idea of a UTM was first proposed by Alan Turing in 1936, as a way to demonstrate the
universality of the Turing machine model of computation. Turing proved that it is possible to
construct a single Turing machine that can simulate any other Turing machine, by encoding
the description of the machine and its input on the tape of the UTM, and then using a special
program on the UTM to simulate the behavior of the encoded machine.

The UTM is a powerful theoretical concept because it shows that a single, fixed machine can carry out any computation that any Turing machine can perform: the program becomes data. This is closely related to the Church-Turing thesis, which states that anything that can be computed by an effective (mechanical) procedure can be computed by a Turing machine.

The UTM has had a profound impact on computer science and mathematics, and has been
used to study the complexity of algorithms, the limits of computability, and the foundations
of artificial intelligence. Many programming languages and computer systems are designed
to be Turing complete, meaning that they can be used to simulate a UTM and carry out any
computation that can be performed by a Turing machine.
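
The essential idea, a machine that takes another machine's description as data and runs it, can be sketched as a small interpreter. The code below is only an illustration (the encoding, the step bound, and the example machine are assumptions of this sketch, not a standard construction): it takes a transition table delta and simulates the described machine on an input tape.

    def run_tm(delta, start, accept, reject, tape_input, blank="_", max_steps=10_000):
        """Simulate the Turing machine described by delta on tape_input.
        delta maps (state, symbol) -> (new_state, new_symbol, move), move in {"L", "R"}.
        max_steps is only a safeguard for this sketch; a real UTM has no such bound."""
        tape = dict(enumerate(tape_input))      # sparse tape: position -> symbol
        state, head = start, 0
        for _ in range(max_steps):
            if state == accept:
                return True
            if state == reject:
                return False
            symbol = tape.get(head, blank)
            new_state, new_symbol, move = delta[(state, symbol)]
            tape[head] = new_symbol
            head += 1 if move == "R" else -1
            state = new_state
        raise RuntimeError("step budget exceeded; the simulated machine may not halt")

    # Example machine (made up for this sketch): accept iff the input contains a '1'.
    delta = {("look", "0"): ("look", "0", "R"),
             ("look", "1"): ("acc", "1", "R"),
             ("look", "_"): ("rej", "_", "R")}
    print(run_tm(delta, "look", "acc", "rej", "0010"))   # True
    print(run_tm(delta, "look", "acc", "rej", "0000"))   # False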

Difference between Turing Machine and Universal Turing Machine:


The main difference between a Turing machine and a Universal Turing Machine
(UTM) is that a UTM is capable of simulating any other Turing machine, while a
Turing machine can only perform computations for a specific task that it is designed
to do.

A Turing machine is a theoretical model of computation that consists of a tape with
symbols on it, a head that can read and write symbols on the tape, and a set of rules
that dictate how the head moves and changes the symbols on the tape. A Turing
machine can perform computations for a specific problem by following the rules
defined for that problem.

On the other hand, a UTM is a Turing machine that is capable of simulating any other
Turing machine. It is essentially a universal computer that can be programmed to
carry out any computation that a Turing machine can perform. To simulate another
Turing machine, the UTM reads the description of the machine and its input on its
tape, and then executes a program that simulates the behavior of the encoded
machine.

Another point worth noting is that a UTM is itself an ordinary Turing machine: it
has a finite set of states and a finite alphabet. Its generality does not come from
having more states, but from the fact that the machine to be simulated is supplied
as data on the tape rather than being built into the transition rules. A
special-purpose Turing machine has its program fixed in its transition function,
whereas a UTM is programmable.

In summary, a Turing machine is a theoretical model of computation that carries out
one specific computation, while a UTM is a single, general-purpose Turing machine
that can simulate any other Turing machine and therefore perform any computation
that a Turing machine can perform.

Tractable and Intractable Problems:


In computer science, a problem is considered tractable if there exists an algorithm
that can solve it efficiently in polynomial time, meaning that the running time of the
algorithm grows no faster than a polynomial function of the input size. On the other
hand, a problem is considered intractable if no polynomial-time algorithm is known
for solving it.

The best-known examples of problems believed to be intractable are the NP-complete
problems, a class that contains the decision versions of many important optimization
problems, such as the travelling salesman problem and the knapsack problem. No
polynomial-time algorithm is known for any of them, and they are widely believed to
require superpolynomial time. However, a proposed solution can be verified in
polynomial time, so these problems belong to NP (nondeterministic polynomial time).

Tractable problems, on the other hand, include many practical and important
problems, such as sorting and searching, that can be solved efficiently in polynomial
time. Polynomial-time algorithms have a running time that is proportional to a
polynomial function of the input size, and thus the time required to solve the problem
grows relatively slowly as the input size increases.

The distinction between tractable and intractable problems is important in computer
science and optimization theory, as it helps researchers identify which problems can
feasibly be solved efficiently and which are likely to require significant
computational resources. Many algorithms and data structures have been developed to
solve tractable problems efficiently, while intractable problems often require
heuristic or approximation algorithms, which sacrifice optimality for efficiency.

P and NP Class:
In computational complexity theory, P and NP are classes of decision problems. P
stands for "polynomial time," and consists of decision problems that can be solved
by a deterministic Turing machine in polynomial time. In other words, there exists an
algorithm that can solve the problem in a number of steps that grows no faster than
a polynomial function of the input size.

On the other hand, NP stands for "nondeterministic polynomial time," and consists of
decision problems that can be verified by a deterministic Turing machine in
polynomial time. In other words, if a solution to the problem is given, it can be
checked in a number of steps that grows no faster than a polynomial function of the
input size.
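
To make "verified in polynomial time" concrete, the sketch below checks a certificate for the decision version of the travelling salesman problem ("is there a tour of total length at most k?"). Given a candidate ordering of the cities, it verifies in polynomial time that the ordering visits every city exactly once and that the closed tour is short enough. The city names, distances, and bound are made up for illustration.

    def verify_tsp_certificate(cities, dist, tour, k):
        """Polynomial-time verifier: tour is the certificate (an ordering of cities);
        dist[(a, b)] is the distance between cities a and b (assumed symmetric)."""
        if sorted(tour) != sorted(cities):          # must visit each city exactly once
            return False
        total = sum(dist[(tour[i], tour[(i + 1) % len(tour)])]
                    for i in range(len(tour)))      # length of the closed tour
        return total <= k

    # Illustrative instance.
    cities = ["A", "B", "C", "D"]
    dist = {("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("D", "A"): 2,
            ("A", "C"): 3, ("B", "D"): 3}
    dist.update({(b, a): d for (a, b), d in list(dist.items())})   # make symmetric
    print(verify_tsp_certificate(cities, dist, ["A", "B", "C", "D"], 6))   # True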

The most important question in complexity theory is whether P equals NP, which is
one of the seven Millennium Prize Problems. If P equals NP, then every problem that
can be checked in polynomial time can also be solved in polynomial time. In practical
terms, this would mean that many important optimization problems, such as the
traveling salesman problem and the knapsack problem, could be solved efficiently,
which would have a significant impact on fields such as cryptography,
bioinformatics, and artificial intelligence.

However, despite decades of research, no polynomial-time algorithm has been found
for any NP-complete problem, and it is widely believed that P is not equal to NP.
This implies that there are problems that are relatively easy to check but very hard
to solve, and that many optimization problems are likely to remain intractable.
Nonetheless, researchers continue to study P and NP, and have developed many
approximation and heuristic algorithms that can solve many practical problems
efficiently.

Kruskal’s Algorithm for P Class Problem:


Kruskal's algorithm is a greedy algorithm used to find the minimum spanning tree of
a weighted undirected graph. The problem of finding a minimum spanning tree is in
the class P, which means that it can be solved in polynomial time.

The algorithm works by first sorting the edges of the graph in non-decreasing order
of their weights. It then initializes an empty set of edges, and iteratively adds the
edges to the set, one at a time, in order of increasing weight, as long as adding the
edge does not create a cycle. This process continues until all the vertices are
connected, or until no more edges can be added without creating a cycle.
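
A minimal sketch of the algorithm in Python (the graph representation and the example graph are illustrative), using a simple union-find structure to detect cycles:

    def kruskal(n, edges):
        """n: number of vertices labelled 0..n-1.
        edges: list of (weight, u, v) tuples of an undirected graph.
        Returns the edges of a minimum spanning tree (graph assumed connected)."""
        parent = list(range(n))

        def find(x):                           # representative of x's component
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression (halving)
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):          # edges in non-decreasing weight
            ru, rv = find(u), find(v)
            if ru != rv:                       # the edge does not create a cycle
                parent[ru] = rv                # merge the two components
                mst.append((u, v, w))
        return mst

    # Example: 4 vertices, 5 weighted edges.
    print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))
    # -> [(0, 1, 1), (1, 3, 2), (1, 2, 3)]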

The correctness of Kruskal's algorithm can be proved by the cut property, which
states that if an edge is in the minimum spanning tree of a graph, then it must be a
light edge crossing some cut of the graph. The algorithm ensures that it always
selects the lightest edge that crosses any cut of the graph, and therefore the set of
edges it selects forms a minimum spanning tree.

Kruskal's algorithm has a time complexity of O(E log E), where E is the number of
edges in the graph. Sorting the edges takes O(E log E) time, and the algorithm then
performs up to E iterations, each of which checks whether adding an edge would
create a cycle; with a union-find data structure (using union by rank and path
compression) each such check takes nearly constant amortized time.

Overall, Kruskal's algorithm is a simple and efficient algorithm for solving the
minimum spanning tree problem, which is an important problem in graph theory and
computer science.

Travelling Salesman Problem for NP Class Problem:


The Travelling Salesman Problem (TSP) is a classic example of an NP-hard problem. The problem is defined as follows: given a set of cities and the distances between them, find the shortest possible route that visits each city exactly once and returns to the starting city.

The TSP is an optimization problem: it asks for the best solution among a set of candidate solutions, namely all possible orderings (permutations) of the cities. The brute-force approach of checking every permutation therefore takes an amount of time that grows factorially with the number of cities.

Several kinds of algorithms can be applied to the TSP, including brute-force search, dynamic programming (such as the Held-Karp algorithm), and heuristic algorithms. All known exact algorithms, however, have a worst-case running time that grows exponentially with the size of the problem.

One common approach is therefore to use a heuristic algorithm, such as the nearest neighbour algorithm or the 2-opt algorithm. These algorithms do not guarantee an optimal solution, but they can often find good solutions quickly for small to medium-sized instances of the problem.
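
A minimal sketch of the nearest neighbour heuristic (the representation of cities and distances is an assumption of this sketch; the heuristic gives no optimality guarantee):

    def nearest_neighbour_tour(cities, dist, start):
        """Greedy TSP heuristic: from the current city, always move to the
        nearest unvisited city, then return to the starting city.
        dist[(a, b)] is the distance from city a to city b."""
        tour = [start]
        unvisited = set(cities) - {start}
        while unvisited:
            current = tour[-1]
            nearest = min(unvisited, key=lambda c: dist[(current, c)])
            tour.append(nearest)
            unvisited.remove(nearest)
        return tour + [start]                  # close the tour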


In general, the TSP is NP-hard, which means that it is believed to be impossible to find an exact solution in polynomial time. Nonetheless, researchers continue to study the TSP and develop new algorithms and techniques for solving it efficiently. The TSP is an important problem in operations research, computer science, and logistics, and has many practical applications in areas such as transportation, manufacturing, and scheduling.

