Learning objectives
After studying this chapter, students should be able to:
Explain the concept of computational theory.
Define a Turing machine and explain its functioning.
Define the classes of problems (P, NP, NP-hard, NP-complete, …).
Calculate and express the time efficiency of an algorithm in terms of big-O notation.
Contents
I. COMPUTATIONAL THEORY
I.1 Notion of Turing machine
I.2 Notion of computable function
I.3 The Church–Turing thesis
I.4 Notion of decidable problem
I.5 Halting problem
I.6 Computational complexity
I.7 Classes of problems
II. EFFICIENCY ANALYSIS OF AN ALGORITHM
II.1 What affects the efficiency of an algorithm?
II.2 Time for an algorithm to run t(n)
II.3 Big-O Notation
II.4 Algorithm Analysis: Loops
This topic and others are available on www.placide.blog4ever.com and www.dzplacide.overblog.com in PDF format
Topic: Computational Complexity and computability 2 By DZEUGANG Placide
I. COMPUTATIONAL THEORY
Computability theory deals primarily with the question of the extent to which a problem is
solvable on a computer.
I.1 Notion of Turing machine
A Turing machine is a theoretical machine that is used in thought experiments to examine the
abilities and limitations of computers. In essence, a Turing machine is imagined to be a
simple computer that reads and writes symbols one at a time on an endless tape by strictly
following a set of rules. It determines what action it should perform next according to its
internal "state" and what symbol it currently sees. An example of one of a Turing Machine's
rules might be: "If you are in state 2 and you see an 'A', change it to 'B' and move left."
The Turing machine was described by Alan Turing in 1936, who called it an "a-machine" (automatic machine). Turing machines are not intended as a practical computing technology, but rather as a thought experiment representing a computing machine.
In a deterministic Turing machine, the set of rules prescribes at most one action to be
performed for any given situation. A non-deterministic Turing machine (NTM), by contrast,
may have a set of rules that prescribes more than one action for a given situation. For
example, a non-deterministic Turing machine may have both "If you are in state 2 and you
see an 'A', change it to a 'B' and move left" and "If you are in state 2 and you see an 'A',
change it to a 'C' and move right" in its rule set.
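The rule-following behaviour described above can be sketched as a small simulator. Everything below (the Rule struct, run_tm, and the toy machine that rewrites every 'A' as 'B') is invented for illustration, not a standard API:

```c
#include <string.h>

/* A minimal deterministic Turing-machine simulator (illustrative sketch). */

enum { STATES = 2, SYMBOLS = 128 };

typedef struct {
    int  next_state;   /* -1 means "no rule for this situation: halt" */
    char write;        /* symbol to write before moving */
    int  move;         /* +1 = move right, -1 = move left */
} Rule;

/* rules[state][symbol]: at most one action per situation (deterministic) */
static Rule rules[STATES][SYMBOLS];

/* Run the machine on a tape (a mutable NUL-terminated string standing in
   for a finite window of the infinite tape). Returns the steps taken. */
int run_tm(char *tape, int head, int state, int max_steps)
{
    int steps = 0;
    int len = (int)strlen(tape);
    while (steps < max_steps && head >= 0 && head < len) {
        Rule r = rules[state][(unsigned char)tape[head]];
        if (r.next_state < 0)        /* no applicable rule: halt */
            break;
        tape[head] = r.write;        /* write, then move, then change state */
        head += r.move;
        state = r.next_state;
        ++steps;
    }
    return steps;
}

/* Build a toy machine: in state 0, rewrite each 'A' as 'B' moving right;
   halt on any other symbol. */
void build_example_machine(void)
{
    for (int s = 0; s < STATES; ++s)
        for (int c = 0; c < SYMBOLS; ++c)
            rules[s][c].next_state = -1;   /* default: no rule, halt */
    rules[0]['A'] = (Rule){ .next_state = 0, .write = 'B', .move = +1 };
}
```

Each rule here has exactly the "if you are in state s and you see symbol x, write y and move" shape used in the example above; a non-deterministic machine would simply allow more than one entry per (state, symbol) pair.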
I.2 Notion of computable function
Any function whose value may be computed using a Turing machine is a computable function. The basic characteristic of a computable function is that there must be a finite procedure (an algorithm) telling how to compute the function.
More precisely, a function is computable if there is a procedure that (1) is given as a finite list of exact instructions, (2) halts after finitely many steps and produces the function's value when given an input in the function's domain, and (3) never produces a wrong value (it may simply run forever on inputs outside the domain). Enderton lists several clarifications of these requirements; two important ones are:
No time limitation is assumed. The procedure is required to halt after finitely many steps in order to produce an output, but it may take arbitrarily many steps before halting.
No space limitation is assumed. Although the procedure may use only a finite amount of storage space during a successful computation, there is no bound on the amount of space that is used.
A function is said to be calculable if its values can be found by some purely mechanical
process.
I.3 The Church–Turing thesis
The Church–Turing thesis states that any function computable by a procedure possessing the three properties listed above is a computable function. Because these three properties are not formally stated, the Church–Turing thesis cannot be proved. The following facts are often taken as evidence for the thesis:
Many equivalent models of computation are known, and they all give the same
definition of computable function (or a weaker version, in some instances).
The Church–Turing thesis is sometimes used in proofs to justify that a particular function is
computable by giving a concrete description of a procedure for the computation.
I.4 Notion of decidable problem
A decision problem asks an algorithm to determine the existence of some object or its membership in a set; the problem is decidable if some algorithm always halts with the correct answer. Some of the most important problems in mathematics are undecidable.
I.5 Halting problem
One of the best-known unsolvable problems is the halting problem. It asks the following question: given an arbitrary Turing machine M over an alphabet Σ = {a, b} and an arbitrary string w over Σ, does M halt when it is given w as input?
It can be shown that the halting problem is not decidable, hence unsolvable.
The statement that the halting problem cannot be solved by a Turing machine is one of the
most important results in computability theory, as it is an example of a concrete problem that
is both easy to formulate and impossible to solve using a Turing machine. Much of
computability theory builds on the halting problem result.
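The standard diagonal argument behind this result can be sketched in pseudocode. Suppose, for contradiction, that a procedure halts(M, w) existed that always terminates and answers correctly:

```
halts(M, w):   assumed to return true if machine M halts on input w,
               and false otherwise, itself always halting

paradox(M):
    if halts(M, M):     -- would M halt when run on its own description?
        loop forever
    else:
        halt
```

Now consider running paradox on itself. If paradox(paradox) halts, then halts(paradox, paradox) returned true, so paradox loops forever; if it loops forever, halts returned false, so paradox halts. Either way we reach a contradiction, so no such procedure halts can exist.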
I.6 Computational complexity
Computational complexity theory studies the resources, such as time and memory, required to solve a computational problem, and classifies problems according to how those requirements grow with the size of the input.
I.7 Classes of problems
The class of polynomially solvable problems, P (those whose solution time is bounded by a polynomial in the input size), is always contained in NP. If a problem is known to be in NP and a solution to the problem is somehow known, then demonstrating the correctness of the solution can always be reduced to a single P (polynomial time) verification. If P and NP are not equivalent, then the solution of NP-problems requires (in the worst case) an exhaustive search.
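This fast-verification property can be illustrated with subset-sum, a standard NP-complete problem (the function below is an invented sketch, not a library API). Finding a subset of a set of integers that sums to a target may require trying exponentially many subsets, but verifying a proposed subset takes one linear pass:

```c
#include <stddef.h>

/* Illustrative sketch: SUBSET-SUM is in NP because a candidate solution
   (a subset, given here as a 0/1 mask over the set) can be verified in
   polynomial time, even though no polynomial-time algorithm is known for
   finding such a subset. Returns 1 if the candidate is valid, else 0. */
int verify_subset_sum(const int *set, const int *mask, size_t n, int target)
{
    int sum = 0;
    for (size_t i = 0; i < n; ++i)   /* O(n): one pass over the candidate */
        if (mask[i])
            sum += set[i];
    return sum == target;
}
```

The verifier runs in O(n) time regardless of how hard the candidate was to find; that asymmetry between finding and checking is exactly what the class NP captures.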
A problem is said to be NP-hard if an algorithm for solving it can be translated into one for solving any other NP-problem. It is much easier to show that a problem is in NP than to show that it is NP-hard. A problem which is both in NP and NP-hard is called an NP-complete problem. NP-hard problems may be of any type: decision problems, search problems, or optimization problems.
Fig: Euler diagram for P, NP, NP-complete, and NP-hard set of problems
II. EFFICIENCY ANALYSIS OF AN ALGORITHM
II.1 What affects the efficiency of an algorithm?
The analysis of algorithms is the area of computer science that provides tools for contrasting the efficiency of different methods of solution.
There are generally two criteria used to determine whether one algorithm is "better" than another:
Space requirements (i.e. how much memory is needed to complete the task).
Time requirements (i.e. how much time it will take to complete the task).
Two or more algorithms that solve the same problem can be very different and still satisfy these two criteria. Therefore, the next step is to determine which algorithm is "best."
A third criterion that could be considered is the cost of human time, that is, the time to develop and maintain the program.
II.2 Time for an algorithm to run, t(n)
We will attempt to characterise the running time by the size n of the input. We will try to estimate the WORST CASE, and sometimes the BEST CASE, and very rarely the AVERAGE CASE.
Worst Case is the maximum run time, over all inputs of size n, ignoring machine- and implementation-dependent effects. That is, we only consider the "number of times the principal activity of that algorithm is performed".
Best Case: In this case we look at specific instances of input of size n. For example,
we might get best behaviour from a sorting algorithm if the input to it is already
sorted.
Average Case: Arguably, average case is the most useful measure but the most
difficult to measure.
What do we measure?
In analysing an algorithm, rather than a piece of code, we will try to predict the number of times "the principal activity" of that algorithm is performed. For example, if we are analysing
a sorting algorithm we might count the number of comparisons performed, and if it is an
algorithm to find some optimal solution, the number of times it evaluates a solution. If it is a
graph colouring algorithm we might count the number of times we check that a coloured
node is compatible with its neighbours.
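This counting can be made literal in code. The sketch below (an invented example, not taken from the text) instruments a simple selection sort so that its principal activity, element comparisons, is tallied; for input of size n it always performs n(n−1)/2 comparisons:

```c
#include <stddef.h>

/* Selection sort on an int array, instrumented to count its principal
   activity: element comparisons. Sorts a[0..n-1] in place and returns
   the number of comparisons, which is always n*(n-1)/2. */
long selection_sort_count(int *a, size_t n)
{
    long comparisons = 0;
    for (size_t i = 0; i + 1 < n; ++i) {
        size_t min = i;
        for (size_t j = i + 1; j < n; ++j) {
            ++comparisons;            /* the activity we are measuring */
            if (a[j] < a[min])
                min = j;
        }
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
    }
    return comparisons;
}
```

Because the comparison count depends only on n, not on the contents of the array, this algorithm's best, worst, and average cases all coincide; for algorithms where they differ, the same counter would be examined over different inputs of size n.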
II.3 Big-O Notation
Suppose the worst case time for algorithm A is t(n) = n⁴ + 3n² + 10 for input of size n. The idea is to reduce the formula so that it captures the qualitative behaviour in the simplest possible terms. We eliminate any term whose contribution to the total ceases to be significant as n becomes large. We also eliminate any constant factors, as these have no effect on the overall pattern as n increases. Thus we may approximate t(n) above as n⁴, and say that the algorithm runs in O(n⁴) time.
Definition: f(n) is O(g(n)) if there exist positive numbers c and N such that f(n) ≤ c·g(n) for all n ≥ N. That is, f is big-O of g if there is a constant c such that f is not larger than c·g for sufficiently large values of n (greater than N).
Here c·g(n) is an upper bound on the value of f(n). That is, the number of operations is at worst proportional to g(n) for all large values of n.
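As a concrete instance of the definition, the function t(n) = n⁴ + 3n² + 10 from the earlier example is O(n⁴); one valid (not unique) choice of witnesses is c = 2 and N = 3. The sketch below checks the inequality numerically over a finite range; such a check is only a sanity test, not a proof, since the definition quantifies over all n ≥ N:

```c
/* Numerical sanity check that t(n) = n^4 + 3n^2 + 10 satisfies
   t(n) <= 2 * n^4 for n >= 3, witnessing t(n) = O(n^4). */
long long t(long long n) { return n*n*n*n + 3*n*n + 10; }
long long g(long long n) { return n*n*n*n; }

/* Returns 1 if t(n) <= c * g(n) for every n in [N, upto], else 0. */
int bound_holds(long long c, long long N, long long upto)
{
    for (long long n = N; n <= upto; ++n)
        if (t(n) > c * g(n))
            return 0;
    return 1;
}
```

Note that the bound fails at n = 2 (t(2) = 38 > 2·16 = 32), which is exactly why the definition only demands the inequality from some threshold N onward.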
II.4 Algorithm Analysis: Loops
Consider an n × n two-dimensional array. Write a loop to store the row sums in a one-dimensional array rows and the overall total in grandTotal.
LOOP 1:
grandTotal = 0;
for (k = 0; k < n; ++k)          /* k < n, not k < n-1: visit all n rows */
{
    rows[k] = 0;
    for (j = 0; j < n; ++j)      /* likewise all n columns */
    {
        rows[k] = rows[k] + matrix[k][j];
        grandTotal = grandTotal + matrix[k][j];
    }
}
LOOP 2:
grandTotal = 0;
for (k = 0; k < n; ++k)
{
    rows[k] = 0;
    for (j = 0; j < n; ++j)
    {
        rows[k] = rows[k] + matrix[k][j];
    }
    grandTotal = grandTotal + rows[k];  /* one addition per row instead of one per element */
}
Example 1:
Use big-O notation to analyze the time efficiency of the following fragment of C code:
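The fragment itself did not survive in this copy. A hypothetical reconstruction consistent with the stated count (triply nested loops whose innermost loop runs n/2 times, giving n · n · n/2 = n³/2 operations) is:

```c
/* Hypothetical reconstruction of the missing fragment: three nested
   loops, with the innermost running n/2 times, so the body executes
   n * n * (n/2) = n^3/2 times. */
long nested_ops(long n)
{
    long count = 0;
    for (long i = 0; i < n; ++i)
        for (long j = 0; j < n; ++j)
            for (long k = 0; k < n / 2; ++k)
                ++count;              /* the principal activity */
    return count;
}
```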
Since these loops are nested, the efficiency is n³/2, or O(n³) in big-O terms.
Thus, for two loops with O[f1(n)] and O[f2(n)] efficiencies, the efficiency of the nesting of
these two loops is O[f1(n) * f2(n)].
Example 2:
Use big-O notation to analyze the time efficiency of the following fragment of C code:
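Again the fragment is missing from this copy. A hypothetical reconstruction consistent with the stated count (a loop of n/2 iterations followed in sequence by a doubly nested loop of n · n iterations, giving n/2 + n² operations) is:

```c
/* Hypothetical reconstruction of the missing fragment: two loops in
   sequence, the first running n/2 times and the second n * n times,
   for n/2 + n^2 operations in total. */
long sequenced_ops(long n)
{
    long count = 0;
    for (long k = 0; k < n / 2; ++k)    /* n/2 operations */
        ++count;
    for (long i = 0; i < n; ++i)        /* n * n operations */
        for (long j = 0; j < n; ++j)
            ++count;
    return count;
}
```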
The number of operations executed by these loops is the sum of the individual loop efficiencies. Hence, the efficiency is n/2 + n², or O(n²) in big-O terms.
Thus, for two loops with O[f1(n)] and O[f2(n)] efficiencies, the efficiency of the sequencing
of these two loops is O[fD(n)] where fD(n) is the dominant of the functions f1(n) and f2(n).