Unit-5 Undecidability-ToC
Unsolvable Problems:
Unsolvable problems, also known as undecidable problems, are computational problems for
which there is no algorithmic solution. In other words, there is no algorithm that can solve
them correctly for all possible inputs.
One example of an unsolvable problem is the Halting Problem, which asks whether a given
program will eventually halt when run on a particular input. It has been proven that there is
no algorithm that can solve the Halting Problem for all possible programs and inputs.
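The impossibility argument can be sketched in code. Assuming, for contradiction, that a hypothetical function `halts(program, input)` existed, we could build a program that contradicts its own oracle (all names here are illustrative, not a real API):

```python
# Sketch of the classic diagonalization argument. `halts` is a HYPOTHETICAL
# oracle; the Halting Problem's undecidability says no such function exists.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical: return True iff the program halts on the given input."""
    raise NotImplementedError("provably impossible to implement in general")

def paradox(program_source: str) -> str:
    # If the oracle claims this program halts on itself, loop forever;
    # otherwise halt immediately. Feeding `paradox` its own source makes
    # either answer from `halts` wrong -- the contradiction.
    if halts(program_source, program_source):
        while True:
            pass
    return "halted"
```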
The concept of unsolvability was first introduced by the mathematician Kurt Gödel in the
1930s, as part of his work on the foundations of mathematics. Gödel showed that certain
mathematical statements are undecidable, meaning that there is no proof or disproof of
them within the formal system itself.
The concept of undecidability has since been extended to the study of computation and
algorithms, and has led to a deeper understanding of the limits of what can be computed
and what cannot. It has also led to the development of alternative approaches to solving
such problems, such as approximation and heuristic methods. Undecidability is a central
topic in theoretical computer science and mathematics, and has applications in fields such as
logic, program verification, and cryptography.
Computable Functions:
In the context of undecidability, computable functions refer to functions that can be
computed by a Turing machine. In other words, these are functions that can be computed by
an algorithm.
A decision problem is a problem that can be formulated as a yes/no question, and for which we seek an algorithmic solution.
In other words, a decision problem is a problem for which we want to determine whether a
given input has a particular property. A function is computable if there exists a Turing
machine that can compute it. This means that there exists an algorithm that takes an input
and produces the value of the function in a finite number of steps. A function may also be
defined recursively, meaning that the function is defined in terms of
itself, and there exists a base case that can be computed without reference to the function
itself.
Examples of computable functions include addition, subtraction,
multiplication, and division of integers. However, there are also many examples of functions
that are not computable, such as the halting function, which asks whether a given Turing
machine halts on a given input. The halting problem is an undecidable
problem, which means that there is no algorithm that can solve it for all possible inputs.
The notion of computability is important
because it allows us to formalize the notion of a decision problem, and it provides a way to
distinguish between problems that can be solved algorithmically and problems that cannot
be solved algorithmically.
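As a small illustration of a recursively defined computable function with a base case, consider the factorial function (a standard example, chosen for illustration):

```python
def factorial(n: int) -> int:
    """A computable function: an algorithm computes it in finitely many steps."""
    if n == 0:                       # base case: computed without recursion
        return 1
    return n * factorial(n - 1)      # recursive case: defined in terms of itself
```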
Recursive and Recursively Enumerable Languages:
A language is recursive if there exists an algorithm that can decide whether any
given string belongs to the language or not. In other words, a language is recursive if
it is possible to construct a Turing machine that can halt and output "yes" or "no" for
any input string. Recursive languages are also known as decidable languages.
All recursive languages are recursively enumerable, but not all recursively
enumerable languages are recursive. This is because a recursively enumerable
language can be defined by a Turing machine that halts and accepts every string in the
language, but may never halt when given a string that is not in the language. Therefore,
there may not be an algorithm that can decide whether a given string belongs to the
language or not.
One of the main properties of a recursive language is that it is decidable. This means that
there exists an algorithm or a Turing machine that can decide whether any given string
belongs to the language or not. In other words, a recursive language can be recognized in a
finite amount of time for every input. A recursively enumerable language, in contrast, is
one for which there
exists a Turing machine that can generate all the strings in the language, yet it may not be
possible to decide whether a given string belongs to the language or not. This is because the
Turing machine may never halt if given a string that is not in the language.
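The decider property can be illustrated with a toy language (chosen here purely for illustration): the language of strings with equally many a's and b's is recursive, because the function below halts on every input with a yes/no answer.

```python
def decide_equal_ab(w: str) -> bool:
    """A decider for the recursive language {w in {a,b}* : #a(w) = #b(w)}.
    It halts on EVERY input with a yes/no answer -- the defining property
    of a recursive (decidable) language."""
    return w.count("a") == w.count("b")
```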
Another property of recursive languages is that they are closed under complementation,
intersection, and union. This means that if L1 and L2 are recursive languages, then their
complements, intersection, and union are also recursive.
Recursively enumerable languages, on the other hand, are not necessarily closed under
complementation: if a language and its complement are both recursively enumerable, then
the language is in fact recursive.
A further property of recursively enumerable languages is that they are closed under
concatenation, Kleene star, and homomorphism. This means that if L1 and L2 are recursively
enumerable languages, then their concatenation, Kleene star, and homomorphic image are
also recursively enumerable.
In summary, recursive languages are decidable and have certain closure properties, while
recursively enumerable languages are not necessarily decidable and have other closure
properties. These properties are important in the study of formal languages and their
relationship to models of computation.
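The closure of recursive languages under complement and intersection has a direct constructive reading: given deciders for L1 and L2, deciders for the complement and the intersection can be built mechanically. The toy deciders below are illustrative choices:

```python
def complement(decide):
    """From a decider for L, build a decider for the complement of L."""
    return lambda w: not decide(w)

def intersection(d1, d2):
    """From deciders for L1 and L2, build a decider for their intersection."""
    return lambda w: d1(w) and d2(w)

# Toy recursive languages: even-length strings, and strings starting with 'a'.
even_length = lambda w: len(w) % 2 == 0
starts_with_a = lambda w: w.startswith("a")

both = intersection(even_length, starts_with_a)   # still decidable
not_even = complement(even_length)                # still decidable
```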
Post's Correspondence Problem (PCP):
Given a set of string pairs, can we find a sequence of indices that, when applied to
the strings in the corresponding pairs, results in the same string? In other words, given
a set of pairs (s1, t1), (s2, t2), ..., (sn, tn), is there a sequence of indices i1, i2, ..., ik
such that s[i1]s[i2]...s[ik] = t[i1]t[i2]...t[ik]?
The PCP is known to be undecidable, meaning that there is no algorithm that can
solve the problem for all possible inputs. This was proven by showing that the
Halting Problem can be reduced to the PCP, meaning that if we had an algorithm to
solve the PCP, we could use it to solve the Halting Problem, which is known to be
undecidable.
Despite being undecidable, the PCP is an important problem in the study of formal
languages and automata theory, and has applications in cryptography, coding theory,
and DNA computing. The PCP is also a useful tool for proving the undecidability of
other problems, and has been used to show that several other decision problems are
undecidable.
A well-known variant is the Modified Post's Correspondence Problem (MPCP). In the MPCP,
the input is the same kind of finite set of string pairs, but the solution sequence is
required to begin with a designated first pair.
Formally, the input to the MPCP consists of a finite set of string pairs {(s1,t1),
(s2,t2), ..., (sn,tn)} with (s1,t1) distinguished as the starting pair. The problem is to
find a sequence of indices i1, i2, ..., ik with i1 = 1 such that
s[i1]s[i2]...s[ik] = t[i1]t[i2]...t[ik].
The MPCP is also undecidable, meaning that there is no algorithm that can solve the
problem for all possible inputs. This can be shown by a reduction from the Halting Problem;
in turn, the MPCP reduces to the PCP, which is how the undecidability of the PCP itself is
usually proved.
The MPCP has applications in areas such as cryptography, coding theory, and DNA
computing, and it serves as the key intermediate step in proving the undecidability of the
PCP and of several other decision problems in computer science and mathematics.
Example Problems:
Universal Turing Machine (UTM):
The idea of a UTM was first proposed by Alan Turing in 1936, as a way to demonstrate the
universality of the Turing machine model of computation. Turing proved that it is possible to
construct a single Turing machine that can simulate any other Turing machine, by encoding
the description of the machine and its input on the tape of the UTM, and then using a special
program on the UTM to simulate the behavior of the encoded machine.
The UTM is a powerful theoretical concept, because it shows that any computation that can
be performed by a computer program can also be performed by a Turing machine. This is
the basis for the Church-Turing thesis, which states that the Turing machine model of
computation is equivalent in power to any other model of computation that is physically
realizable.
The UTM has had a profound impact on computer science and mathematics, and has been
used to study the complexity of algorithms, the limits of computability, and the foundations
of artificial intelligence. Many programming languages and computer systems are designed
to be Turing complete, meaning that they can be used to simulate a UTM and carry out any
computation that can be performed by a Turing machine.
An ordinary Turing machine carries out one fixed computation. A UTM, on the other hand, is a Turing machine that is capable of simulating any other
Turing machine. It is essentially a universal computer that can be programmed to
carry out any computation that a Turing machine can perform. To simulate another
Turing machine, the UTM reads the description of the machine and its input on its
tape, and then executes a program that simulates the behavior of the encoded
machine.
Note that a UTM is itself an ordinary Turing machine: it has a fixed, finite set of
states and a fixed transition function. It achieves universality not by having more
states, but by reading the description of the machine to be simulated as data on its
tape. The UTM can be programmed to perform any computation that any Turing machine can
perform, but it is not a more powerful model of computation; it is exactly as powerful
as the Turing machine model itself.
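The idea of taking a machine description as data can be sketched with a tiny simulator. The encoding below (a dictionary of transitions) is an illustrative choice, not Turing's original encoding:

```python
def simulate(transitions, start, accept, tape_input, max_steps=10_000):
    """A miniature 'universal' simulator: the machine to run is supplied as
    DATA (a transition table), just as a UTM reads an encoded machine from
    its tape. Returns True/False, or None if the step budget runs out."""
    tape = dict(enumerate(tape_input))   # sparse tape; '_' means blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, "_"))
        if key not in transitions:       # no applicable rule: reject
            return False
        state, write, move = transitions[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return None                          # no verdict within the step bound

# Example encoded machine: accepts exactly the strings in a* (zero or more a's).
A_STAR = {("q0", "a"): ("q0", "a", "R"),
          ("q0", "_"): ("q_acc", "_", "R")}
```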
Intractable problems are problems for which no polynomial-time algorithm is known,
so the time required to solve them grows very quickly with the input size.
Tractable problems, on the other hand, include many practical and important
problems, such as sorting and searching, that can be solved efficiently in polynomial
time. Polynomial-time algorithms have a running time that is proportional to a
polynomial function of the input size, and thus the time required to solve the problem
grows relatively slowly as the input size increases.
The class P consists of decision problems that can be solved by a deterministic Turing
machine in polynomial time. NP, on the other hand, stands for "nondeterministic
polynomial time," and consists of decision problems whose solutions can be verified by a
deterministic Turing machine in
polynomial time. In other words, if a solution to the problem is given, it can be
checked in a number of steps that grows no faster than a polynomial function of the
input size.
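The verification idea behind NP can be made concrete with subset sum, a standard NP-complete problem (the function name and certificate format below are illustrative). Checking a proposed certificate takes only linear time, even though finding one may take exponential time:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: `certificate` is a list of indices claimed
    to select a subset of `numbers` summing to `target`. The check is O(n)."""
    n = len(numbers)
    if len(set(certificate)) != len(certificate):   # indices must be distinct
        return False
    if any(i < 0 or i >= n for i in certificate):   # and within range
        return False
    return sum(numbers[i] for i in certificate) == target
```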
The most important question in complexity theory is whether P equals NP, which is
one of the seven Millennium Prize Problems. If P equals NP, then every problem that
can be checked in polynomial time can also be solved in polynomial time. In practical
terms, this would mean that many important optimization problems, such as the
traveling salesman problem and the knapsack problem, could be solved efficiently,
which would have a significant impact on fields such as cryptography,
bioinformatics, and artificial intelligence.
Kruskal's Algorithm:
The algorithm works by first sorting the edges of the graph in non-decreasing order
of their weights. It then initializes an empty set of edges, and iteratively adds the
edges to the set, one at a time, in order of increasing weight, as long as adding the
edge does not create a cycle. This process continues until all the vertices are
connected, or until no more edges can be added without creating a cycle.
The correctness of Kruskal's algorithm can be proved by the cut property, which
states that if an edge is in the minimum spanning tree of a graph, then it must be a
light edge crossing some cut of the graph. The algorithm ensures that it always
selects the lightest edge that crosses any cut of the graph, and therefore the set of
edges it selects forms a minimum spanning tree.
Kruskal's algorithm has a time complexity of O(E log E), where E is the number of
edges in the graph. This is because the algorithm needs to sort the edges, which
takes O(E log E) time, and then perform up to E iterations, each of which involves
checking whether adding an edge would create a cycle; with a union-find data
structure using path compression and union by rank, each such check takes nearly
constant amortized time.
Overall, Kruskal's algorithm is a simple and efficient algorithm for solving the
minimum spanning tree problem, which is an important problem in graph theory and
computer science.
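The steps above can be sketched as follows (the `(weight, u, v)` edge representation is an illustrative choice):

```python
def kruskal(num_vertices, edges):
    """Kruskal's algorithm. `edges` is a list of (weight, u, v) tuples;
    returns (mst_edges, total_weight)."""
    parent = list(range(num_vertices))

    def find(x):                         # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):        # sorting: the O(E log E) step
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total
```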
Travelling Salesman Problem (TSP):
The problem is defined as follows: given a set of cities and the distances between them, find the
shortest possible route that visits each city exactly once and returns to the starting city.
The TSP is an optimization problem, which means that it involves finding the best solution
among a set of possible solutions. In this case, the set of possible solutions is the set of all
possible permutations of the cities. Therefore, the brute-force approach to solving the TSP
would involve checking all possible permutations, which would take an exponential amount
of time.
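The brute-force approach described above can be written directly with permutations (the distance-matrix input format is an assumption made for illustration):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by enumerating all tours: O(n!) time, feasible only for
    tiny inputs. `dist` is a square matrix of pairwise distances."""
    n = len(dist)
    best_len, best_tour = None, None
    for perm in permutations(range(1, n)):     # fix city 0 as the start
        tour = (0,) + perm + (0,)              # visit each city once, return
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if best_len is None or length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour
```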
There are several algorithms that can be used to solve the TSP exactly, including brute-force
search, dynamic programming (the Held-Karp algorithm), and branch and bound. However,
all of these algorithms have a worst-case time complexity that grows exponentially with the size
of the problem.
One common approach to solving the TSP is to use a heuristic algorithm, such as the
nearest neighbour algorithm or the 2-opt algorithm. These algorithms do not guarantee an
optimal solution, but they can often find good solutions quickly for small to medium-sized
instances. Because the TSP is NP-hard, no known algorithm is able
to find an exact solution in polynomial time. Nonetheless, researchers continue to study the
TSP and develop new algorithms and techniques for solving it efficiently. The TSP is an
important problem in operations research, computer science, and logistics, and has many
practical applications, for example in route planning and circuit design.
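A minimal sketch of the nearest neighbour heuristic mentioned above (same illustrative distance-matrix format; being greedy, it may miss the optimal tour):

```python
def nearest_neighbour(dist, start=0):
    """Greedy heuristic: repeatedly visit the closest unvisited city.
    Runs in O(n^2) time but gives no optimality guarantee."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    tour.append(start)                   # return to the starting city
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return length, tour
```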
Important 2Marks: