
Ministry of Secondary Education Republic of Cameroon

Progressive Comprehensive High School Peace – Work – Fatherland


PCHS Mankon – Bamenda School Year: 2013/2014
Department of Computer Studies

TOPIC: COMPUTATIONAL COMPLEXITY AND COMPUTABILITY
Class: Comp. Sc. A/L By: DZEUGANG PLACIDE

Learning objectives
After studying this chapter, students should be able to:
 Explain the concept of computational theory.
 Define a Turing machine and explain its functioning.
 Define classes of problems (P, NP, NP-hard, NP-complete, …).
 Calculate and express the time efficiency of an algorithm in terms of Big-O notation.

Contents
I. COMPUTATIONAL THEORY
I.1 Notion of Turing machine
I.2 Notion of computable function
I.3 The Church–Turing thesis
I.4 Notion of decidable problem
I.5 Halting problem
I.6 Computational complexity
I.7 Classes of problems
II. EFFICIENCY ANALYSIS OF AN ALGORITHM
II.1 What affects the efficiency of an algorithm?
II.2 Time for an algorithm to run t(n)
II.3 Big-O Notation
II.4 Algorithm Analysis: Loops

This topic and others are available on www.placide.blog4ever.com and www.dzplacide.overblog.com in PDF format

I. COMPUTATIONAL THEORY

Computability theory deals primarily with the question of the extent to which a problem is
solvable on a computer.

I.1 Notion of Turing machine

A Turing machine is a theoretical machine that is used in thought experiments to examine the
abilities and limitations of computers. In essence, a Turing machine is imagined to be a
simple computer that reads and writes symbols one at a time on an endless tape by strictly
following a set of rules. It determines what action it should perform next according to its
internal "state" and what symbol it currently sees. An example of one of a Turing Machine's
rules might be: "If you are in state 2 and you see an 'A', change it to 'B' and move left."

The "Turing" machine was described by Alan Turing in 1937, who called it an "a(utomatic)-
machine". Turing machines are not intended as a practical computing technology, but rather
as a thought experiment representing a computing machine.

Turing machines are extremely simple calculating devices. A Turing machine remembers only one number, called its state. It moves back and forth along an infinite tape, scanning and writing symbols and changing its state. Its action at a given step in the calculation is based on only two factors: its current state and the symbol that it is currently scanning on the tape. It continues in this way until it enters a special state called the halt state. In spite of their simplicity, Turing machines can perform any calculation that can be performed by any computer.

Deterministic and non-deterministic Turing machines

In a deterministic Turing machine, the set of rules prescribes at most one action to be
performed for any given situation. A non-deterministic Turing machine (NTM), by contrast,
may have a set of rules that prescribes more than one action for a given situation. For
example, a non-deterministic Turing machine may have both "If you are in state 2 and you
see an 'A', change it to a 'B' and move left" and "If you are in state 2 and you see an 'A',
change it to a 'C' and move right" in its rule set.
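
To make this rule-following behaviour concrete, the small C program below simulates a deterministic Turing machine. It is only an illustrative sketch, not part of the original lesson: the machine, its rule table and the tape contents are made-up assumptions. The example machine starts in state 0, replaces every 'A' it sees with 'B' while moving right, and halts when it reaches a blank cell (written '_').

#include <stdio.h>
#include <string.h>

#define TAPE_LEN 32
#define HALT (-1)

/* Rule format: (current state, symbol read) -> (symbol written, move, next state).
   move is -1 for left, +1 for right, 0 for stay. */
struct rule { int state; char read; char write; int move; int next; };

/* Example machine (made up for illustration): replace every 'A' with 'B',
   moving right, and halt on the first blank. */
static const struct rule rules[] = {
    {0, 'A', 'B', +1, 0},
    {0, 'B', 'B', +1, 0},
    {0, '_', '_',  0, HALT},
};

int main(void) {
    char tape[TAPE_LEN];
    memset(tape, '_', TAPE_LEN);      /* blank tape */
    memcpy(tape, "ABAB", 4);          /* initial tape contents */

    int state = 0, head = 0;
    while (state != HALT) {
        int applied = 0;
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
            if (rules[i].state == state && rules[i].read == tape[head]) {
                tape[head] = rules[i].write;   /* write symbol */
                head += rules[i].move;         /* move the head */
                state = rules[i].next;         /* change state */
                applied = 1;
                break;
            }
        }
        if (!applied || head < 0 || head >= TAPE_LEN)
            break;                             /* no applicable rule: stop */
    }
    printf("Final tape: %.8s\n", tape);        /* prints BBBB____ */
    return 0;
}

Running the program prints the final tape BBBB____, showing how every step is determined solely by the current state and the scanned symbol.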


I.2 Notion of computable function

Any function whose value may be computed using a Turing machine is a computable
function. The basic characteristic of a computable function is that there must be a finite
procedure (an algorithm) telling how to compute the function.

Enderton lists three clarifications of the requirements on the procedure for a computable function:

1. The procedure must theoretically work for arbitrarily large arguments.

2. No time limitation is assumed. The procedure is required to halt after finitely many
steps in order to produce an output, but it may take arbitrarily many steps before
halting.

3. No space limitation is assumed. Although the procedure may use only a finite amount
of storage space during a successful computation, there is no bound on the amount of
space that is used.

A function is said to be calculable if its values can be found by some purely mechanical
process.
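
As an illustration (this example is ours, not from the notes), the greatest common divisor is a computable function: Euclid's algorithm is a finite, purely mechanical procedure that halts after finitely many steps for any pair of natural numbers.

#include <stdio.h>

/* Euclid's algorithm: a finite procedure computing gcd(a, b). */
unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {          /* each step strictly decreases b, so it halts */
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main(void) {
    printf("gcd(252, 105) = %u\n", gcd(252, 105));   /* prints 21 */
    return 0;
}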

I.3 The Church–Turing thesis

The Church–Turing thesis states that any function computable from a procedure possessing
the three properties listed above is a computable function. Because these three properties are
not formally stated, the Church–Turing thesis cannot be proved. The following facts are often
taken as evidence for the thesis:

 Many equivalent models of computation are known, and they all give the same definition of computable function (or a weaker version, in some instances).
 No stronger model of computation which is generally considered to be effectively calculable has been proposed.

The Church–Turing thesis is sometimes used in proofs to justify that a particular function is
computable by giving a concrete description of a procedure for the computation.

I.4 Notion of decidable problem

A decision problem is a question in some formal system with a yes-or-no answer, depending on the values of some input parameters. For example, the problem "given two numbers x and y, does x evenly divide y?" is a decision problem. The answer can be either 'yes' or 'no', and depends upon the values of x and y.
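
This example problem can be decided by a short procedure that always halts with a yes-or-no answer. The C sketch below is our own illustration; the function name divides and the convention chosen for x = 0 are assumptions, not part of the original text.

#include <stdio.h>

/* Decision procedure for "does x evenly divide y?": always halts, answers 1 (yes) or 0 (no). */
int divides(int x, int y) {
    if (x == 0) return 0;      /* convention chosen here: 0 divides nothing */
    return (y % x) == 0;
}

int main(void) {
    printf("%s\n", divides(3, 12) ? "yes" : "no");   /* yes */
    printf("%s\n", divides(5, 12) ? "yes" : "no");   /* no */
    return 0;
}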

Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an effective method to determine the existence of some object or its membership in a set; some of the most important problems in mathematics are undecidable.

A decision problem A is called decidable or effectively solvable if A is a recursive set. A problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Problems that are not decidable are called undecidable.

The halting problem is an important undecidable decision problem.

I.5 Halting problem

One of the well-known unsolvable problems is the halting problem. It asks the following question: given an arbitrary Turing machine M over the alphabet Σ = {a, b} and an arbitrary string w over Σ, does M halt when it is given w as input?
It can be shown that the halting problem is not decidable, hence unsolvable.

The statement that the halting problem cannot be solved by a Turing machine is one of the
most important results in computability theory, as it is an example of a concrete problem that
is both easy to formulate and impossible to solve using a Turing machine. Much of
computability theory builds on the halting problem result.
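
The impossibility can be sketched with the standard diagonal argument. In the illustrative C fragment below (our own sketch, not part of the original notes), halts() is assumed to decide the halting problem; it is stubbed with a placeholder return value only so that the file compiles, since no correct implementation can exist. Feeding the paradox program its own source would contradict whatever answer halts() gives.

#include <stdio.h>

/* Hypothetical decider: suppose this could always answer, in finite time,
   whether the program whose source is `prog` halts on input `input`.
   Stubbed here so the file compiles; no correct version can exist. */
int halts(const char *prog, const char *input) {
    (void)prog;
    (void)input;
    return 1;   /* placeholder answer */
}

/* The "diagonal" program: do the opposite of what halts() predicts
   when the program is run on its own source. */
void paradox(const char *prog) {
    if (halts(prog, prog)) {
        for (;;) { }          /* halts() said "halts", so loop forever */
    }
    /* halts() said "loops forever", so halt immediately */
}

int main(void) {
    /* Feeding paradox its own source would contradict whatever halts() answers,
       so no correct halts() can be written. */
    (void)paradox;
    printf("Sketch of the diagonal argument against a halting decider.\n");
    return 0;
}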

I.6 Computational complexity

Complexity, in this context, measures how difficult a problem is to solve, in terms of the resources a computation requires.

Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage.

I.7 Classes of problems

A problem is assigned to the NP (nondeterministic polynomial time) class if it is solvable in polynomial time by a nondeterministic Turing machine.

The class of polynomially solvable problems, P (problems whose solution time is bounded by a polynomial), is always also contained in NP. If a problem is known to be in NP and a candidate solution is somehow known, then demonstrating the correctness of the solution can always be reduced to a single P (polynomial-time) verification. If P and NP are not equivalent, then the solution of NP-problems requires (in the worst case) an exhaustive search.
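
As an illustration of such a polynomial-time verification (our own sketch, using a small hard-coded graph that is not from the notes), the C program below checks whether a proposed ordering of vertices is a Hamiltonian cycle, a problem mentioned below: verifying a candidate cycle is easy even though finding one is believed to be hard.

#include <stdbool.h>
#include <stdio.h>

#define N 4   /* number of vertices in this toy example */

/* Adjacency matrix of a small example graph. */
static const bool adj[N][N] = {
    {0, 1, 1, 1},
    {1, 0, 1, 0},
    {1, 1, 0, 1},
    {1, 0, 1, 0},
};

/* Polynomial-time verifier: check that a candidate ordering of the N vertices
   is a Hamiltonian cycle. The check is fast even though *finding* such a
   cycle is NP-complete. */
bool verify_hamiltonian_cycle(const int tour[N]) {
    bool seen[N] = {false};
    for (int i = 0; i < N; i++) {
        if (tour[i] < 0 || tour[i] >= N || seen[tour[i]])
            return false;                        /* each vertex exactly once */
        seen[tour[i]] = true;
    }
    for (int i = 0; i < N; i++) {
        int u = tour[i], v = tour[(i + 1) % N];  /* wrap around to close the cycle */
        if (!adj[u][v])
            return false;                        /* consecutive vertices must be adjacent */
    }
    return true;
}

int main(void) {
    int tour[N] = {0, 1, 2, 3};
    printf("%s\n", verify_hamiltonian_cycle(tour) ? "valid cycle" : "not a cycle");
    return 0;
}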


A problem is said to be NP-hard if an algorithm for solving it can be translated into one for solving any other NP-problem. It is much easier to show that a problem is in NP than to show that it is NP-hard. A problem which is both in NP and NP-hard is called an NP-complete problem. NP-hard problems may be of any type: decision problems, search problems, or optimization problems.

A problem is NP-complete if it is in NP and an algorithm for solving it can be translated into one for solving any other NP-problem. Examples of NP-complete problems include the Hamiltonian cycle and traveling salesman problems. Linear programming, long thought not to be solvable in polynomial time, was shown to actually be a P-problem by L. Khachiyan in 1979. It is not known whether all apparently hard NP-problems are actually P-problems.

Fig: Euler diagram for the P, NP, NP-complete, and NP-hard sets of problems

II. EFFICIENCY ANALYSIS OF AN ALGORITHM

Two or more algorithms that solve the same problem can be very different and yet both be correct. Therefore, the next step is to determine which algorithm is "best".

The analysis of algorithms is the area of computer science that provides tools for contrasting
the efficiency of different methods of solution.

II.1 What affects the efficiency of an algorithm?

(a) the computer used (the hardware platform)
(b) the representation of abstract data types (ADTs)
(c) the efficiency of the compiler
(d) the competence of the implementer (programming skills)
(e) the complexity of the underlying algorithm
(f) the size of the input

There are generally two criteria used to determine whether one algorithm is "better" than
another.

 Space requirements (i.e. how much memory is needed to complete the task).
 Time requirements (i.e. how much time will it take to complete the task).


A third criterion that could be considered is the cost of human time, that is, the time to develop and maintain the program.

Algorithms cannot be compared simply by running them on computers: run time is system dependent, and even on the same computer it depends on the programming language used. Real-time units such as microseconds should therefore not be used.

II.2 Time for an algorithm to run t(n)

We will attempt to characterise the running time t(n) by the size of the input n. We will try to estimate the WORST CASE, and sometimes the BEST CASE, and very rarely the AVERAGE CASE.

Worst Case is the maximum run time, over all inputs of size n, ignoring effects (a) through (d) above. That is, we only consider the "number of times the principal activity of that algorithm is performed".
Best Case: In this case we look at specific instances of input of size n. For example, we might get best behaviour from a sorting algorithm if the input to it is already sorted.
Average Case: Arguably, the average case is the most useful measure, but it is also the most difficult to measure.

What do we measure?

In analysing an algorithm, rather than a piece of code, we will try to predict the number of times "the principal activity" of that algorithm is performed. For example, if we are analysing a sorting algorithm we might count the number of comparisons performed, and if it is an algorithm to find some optimal solution, the number of times it evaluates a solution. If it is a graph colouring algorithm we might count the number of times we check that a coloured node is compatible with its neighbours.
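
As an illustration (our own example, not from the notes), the C fragment below instruments a linear search so that its principal activity, the comparison of the key against array elements, is counted explicitly; in the worst case the count equals n.

#include <stdio.h>

static long comparisons = 0;   /* counter for the principal activity */

/* Linear search that counts every key comparison it performs. */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++) {
        comparisons++;             /* one comparison per element inspected */
        if (a[i] == key) return i;
    }
    return -1;
}

int main(void) {
    int a[] = {4, 8, 15, 16, 23, 42};
    linear_search(a, 6, 42);                      /* worst case: key is last */
    printf("comparisons = %ld\n", comparisons);   /* prints 6, i.e. n */
    return 0;
}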

II.3 Big-O Notation

Big-O notation is a way of measuring the order of magnitude of a mathematical expression: O(n) means "on the order of n".

Suppose the worst-case time for algorithm A is t(n) = n^4 + 3n^2 + 10 for input of size n. The idea is to reduce the formula so that it captures the qualitative behaviour in the simplest possible terms. We eliminate any term whose contribution to the total ceases to be significant as n becomes large. We also eliminate any constant factors, as these have no effect on the overall pattern as n increases. Thus we may approximate t(n) above as

O(n^4 + 3n^2 + 10) = O(n^4)

Let g(n) = n^4. Then the order of t(n) is O[g(n)].


Definition: f(n) is O(g(n)) if there exist positive numbers c and N such that f(n) <= c*g(n) for all n >= N. That is, f is big-O of g if there is a constant c such that f is no larger than c*g for sufficiently large values of n (greater than N). For example, n^4 + 3n^2 + 10 <= 14n^4 for all n >= 1, so t(n) above is O(n^4) with c = 14 and N = 1.

c*g(n) is an upper bound on the value of f(n). That is, the number of operations is at worst proportional to g(n) for all large values of n.

Categorizing Performance

Asymptotic Bound    Name
O(1)                Constant algorithm
O(log n)            Logarithmic algorithm
O(n)                Linear algorithm
O(n^2)              Quadratic algorithm
O(n^3)              Cubic algorithm
O(a^n)              Exponential algorithm
O(n!)               Factorial algorithm
NB : As the size of a problem increases, the time requirement for an exponential algorithm
usually increases too rapidly to be practical

II.4 Algorithm Analysis: Loops

Consider an n x n two-dimensional array. Write a loop to store the row sums in a one-dimensional array rows and the overall total in grandTotal.

LOOP 1:

grandTotal = 0;
for (k = 0; k < n; ++k)
{
    rows[k] = 0;
    for (j = 0; j < n; ++j)
    {
        rows[k] = rows[k] + matrix[k][j];
        grandTotal = grandTotal + matrix[k][j];
    }
}

It takes 2n^2 addition operations.

LOOP 2:
grandTotal = 0;
for (k = 0; k < n; ++k)
{
    rows[k] = 0;
    for (j = 0; j < n; ++j)
    {
        rows[k] = rows[k] + matrix[k][j];
    }
    grandTotal = grandTotal + rows[k];
}

This one takes n^2 + n addition operations.
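
The two counts can be checked with a small self-contained C program (our own illustration, not part of the original notes) that increments a counter every time an addition is performed; for n = 5 it reports 50 additions for LOOP 1 and 30 for LOOP 2.

#include <stdio.h>

#define N 5   /* any small n will do for the illustration */

int main(void) {
    int matrix[N][N], rows[N];
    long grandTotal, additions;

    /* Fill the matrix with some arbitrary values. */
    for (int k = 0; k < N; k++)
        for (int j = 0; j < N; j++)
            matrix[k][j] = k + j;

    /* LOOP 1: two additions inside the inner loop -> 2*n*n additions. */
    grandTotal = 0;
    additions = 0;
    for (int k = 0; k < N; k++) {
        rows[k] = 0;
        for (int j = 0; j < N; j++) {
            rows[k] = rows[k] + matrix[k][j];         additions++;
            grandTotal = grandTotal + matrix[k][j];   additions++;
        }
    }
    printf("Loop 1: grandTotal = %ld, %ld additions (2n^2 = %d)\n",
           grandTotal, additions, 2 * N * N);

    /* LOOP 2: one addition in the inner loop plus one per outer iteration
       -> n*n + n additions. */
    grandTotal = 0;
    additions = 0;
    for (int k = 0; k < N; k++) {
        rows[k] = 0;
        for (int j = 0; j < N; j++) {
            rows[k] = rows[k] + matrix[k][j];         additions++;
        }
        grandTotal = grandTotal + rows[k];            additions++;
    }
    printf("Loop 2: grandTotal = %ld, %ld additions (n^2 + n = %d)\n",
           grandTotal, additions, N * N + N);

    return 0;
}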

Example 1:

Use big-O notation to analyze the time efficiency of the following fragment of C code:

for (k = 1; k <= n/2; k++)
{
    for (j = 1; j <= n*n; j++)
    {
    }
}

Since these loops are nested, the total number of inner-loop iterations is (n/2) * n^2 = n^3/2, so the efficiency is O(n^3) in big-O terms.

Thus, for two loops with O[f1(n)] and O[f2(n)] efficiencies, the efficiency of the nesting of
these two loops is O[f1(n) * f2(n)].

Example 2:

Use big-O notation to analyze the time efficiency of the following fragment of C code:

for (k = 1; k <= n/2; k++)
{
}

for (j = 1; j <= n*n; j++)
{
}

The number of operations executed by these loops is the sum of the individual loop efficiencies. Hence, the efficiency is n/2 + n^2, or O(n^2) in big-O terms.

Thus, for two loops with O[f1(n)] and O[f2(n)] efficiencies, the efficiency of the sequencing of these two loops is O[fD(n)], where fD(n) is the dominant (faster-growing) of the two functions f1(n) and f2(n).

